IBM Cloud Manager with OpenStack: z/VM Integration Considerations


May 2016

IBM zGrowth Team, Cloud & Smarter Infrastructure
Mike Bonett, Executive I/T Specialist

Special Notices

This document reflects the IBM zGrowth Team Washington Systems Center understanding of many of the questions asked about the integration of IBM Cloud Manager with OpenStack and z/VM. It was produced and reviewed by the members of the IBM zGrowth Team Washington Systems Center organization. This document is presented as-is, and IBM does not assume responsibility for the statements expressed herein. It reflects the opinions of the IBM zGrowth Team Washington Systems Center; these opinions are based on hands-on experience with IBM Cloud Manager with OpenStack. If you have questions about the contents of this document, please direct them to Mike Bonett (bonett@us.ibm.com).

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Trademarks

The following terms are trademarks or registered trademarks of International Business Machines Corporation in the United States and/or other countries: CICS, DB2, ECKD, IBM, MQSeries, Parallel Sysplex, System z, WebSphere, z/OS, z/VM, z Systems. A full list of U.S. trademarks owned by IBM may be found at http://www.ibm.com/legal/copytrade.shtml. Microsoft, Windows, Windows NT and the Windows logo are registered trademarks of Microsoft Corporation in the United States and/or other countries. OpenStack is a registered trademark of the OpenStack Foundation. UNIX is a registered trademark in the United States and other countries, licensed exclusively through The Open Group. LINUX and Linux are registered trademarks of Linus Torvalds. Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States and/or other countries. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. Other company, product, and service names may be trademarks or service marks of others.

Acknowledgements

Many thanks to the following people for reviewing this paper and providing input: Art Eisenhour, IBM zGrowth Washington Systems Center; Carol Davis, IBM z Systems ATG SWAT Team; Mike Sine, IBM zGrowth Washington Systems Center.

Contents

Introduction
IBM Cloud Manager with OpenStack and z/VM Architecture
z/VM Directory Manager (DIRMAINT)
z/VM Systems Management Application Programming Interface (SMAPI)
Extreme Cloud Administration Toolkit (xcat)
Virtual Switches (VSWITCH)
Security Manager
Linux Source Images
Linux Deployed Instances
OpenStack Configuration Files
Other Considerations
  Installation Verification Program (IVP)
Summary

Introduction

IBM Cloud Manager with OpenStack provides the ability to create and manage a Linux on z Systems cloud environment that runs under the z/VM operating system. It provides functions for provisioning virtual machines to create a cloud environment and for managing the service management lifecycle of the virtual machines in that environment. Policies can be defined for scaling deployments and for placing instances across managed z/VM LPARs based on various criteria. A lifecycle for how long instances run can also be established; when it expires, an instance is automatically removed and its resources are made available for other cloud deployments.

IBM Cloud Manager with OpenStack provides basic guest provisioning by end users and basic pattern provisioning by cloud administrators. It is currently available only as a component of IBM Cloud Orchestrator 2.5. It executes on Linux on x86 or Power, and can manage z/VM as well as distributed environments. Linux on z Systems images captured by IBM Cloud Manager with OpenStack can be provisioned as individual images by end users, as part of OpenStack Heat templates that create patterns (related combinations of images, software, and topology), and, with IBM Cloud Orchestrator, within a self-service catalog offering. For example, a pattern can contain database servers, a set of connecting web and application servers, and application components (business code and data).

This paper provides technical details on how IBM Cloud Manager with OpenStack integrates with z/VM, and the considerations that result from the integration requirements. Understanding this integration and the associated considerations is required for proper planning. The paper covers the architecture that must be enabled and configured for the integration to work, and provides information on the considerations to be aware of.

The following links provide further details on IBM Cloud Manager with OpenStack:

https://www.ibm.com/support/knowledgecenter/sst55w/welcome
http://www.ibm.com/developerworks/servicemanagement/cvm/sce/

The IBM Cloud Manager with OpenStack versions covered in this paper are 4.2 and 4.3. Both run on Linux on x86 or Power Systems and manage the z/VM environment. Version 4.2 is based on the OpenStack Juno release; version 4.3 is based on the OpenStack Kilo release. With IBM Cloud Orchestrator, version 4.3 is required, and it is the focus of this paper. The statements in this paper are based on these versions and do not speculate on possible changes in future product versions.

IMPORTANT NOTE: IBM Cloud Manager with OpenStack version 4.2 functionality used to be provided with the z/VM Cloud Manager Appliance (CMA), but this function has been removed as of March 2016 (the OpenStack services in the CMA remain and can be activated, but will not work with the IBM Cloud Manager with OpenStack provided with IBM Cloud Orchestrator).

IBM Cloud Manager with OpenStack and z/VM Architecture

IBM Cloud Manager with OpenStack contains these major functions:

- A deployment server, which contains a Chef server used both to deploy the product components and (if desired) to deploy user software configurations as part of instance deployments.
- A controller server, which normally hosts the user interfaces and provides the authentication and OpenStack control services.
- OpenStack compute node and network node functions. Compute nodes deploy and manage instances; network nodes allocate network connectivity for the instances.

IBM Cloud Manager with OpenStack integrates with the Extreme Cloud Administration Toolkit (xcat) function of z/VM 6.3. (NOTE: in this document, "xcat" refers to the xcat functions in total; "XCAT" refers to the xcat management node virtual machine, whose default name is XCAT.)

A basic IBM Cloud Manager with OpenStack implementation is a set of servers running on the x86 or Power platform, normally as guests under VMware or the Kernel-based Virtual Machine (KVM) hypervisor on x86, or under PowerVM on Power. The servers connect to the XCAT virtual machine on the z/VM platform.

Several other server topologies are possible; they are documented in the IBM Cloud Manager with OpenStack online Knowledge Center and the Enabling z/VM for OpenStack guide. In a multiple-LPAR Single System Image (SSI) environment, a single XCAT management node can be used to manage the LPARs. When IBM Cloud Manager with OpenStack is running on a distributed platform, the XCAT management node can be configured to connect to other LPARs via the xcat Hardware Control Point (ZHCP) guest (this is covered in more detail later in the paper). However, a compute/network node server is required for each z/VM LPAR.

IBM Cloud Manager with OpenStack has these requirements at the z/VM system level:

- z/VM 6.3 with the latest xcat maintenance is required.
- Only Linux instances (supported versions of Red Hat or SUSE) can be provisioned on z Systems, and they are provisioned as z/VM guests.

Several z/VM components must be properly configured to enable integration with IBM Cloud Manager with OpenStack and provisioning of Linux on z Systems instances under z/VM. The following diagram depicts these components:

The numbered components in the diagram all play a role in the integration of IBM Cloud Manager with OpenStack with z/VM:

1. The z/VM Directory Manager (DIRMAINT), or a supported equivalent, provides a command-driven interface to manage z/VM directory entries. This paper only references the use of DIRMAINT.

2. The z/VM Systems Management Application Programming Interface (SMAPI) provides programmatic access to DIRMAINT and to z/VM system functions.

3. The Extreme Cloud Administration Toolkit (xcat) is an open source product for provisioning virtual machines. A version is provided with z/VM 6.3 and later releases to support provisioning Linux guests on z/VM. xcat consists of two virtual machines (their default names are XCAT and ZHCP) that receive requests either manually or via a REST API (an illustrative request is shown after this list) and interact with DIRMAINT and SMAPI to carry out configuration and management requests. xcat also maintains an image repository for storing images created from existing Linux source image guests.

4. Virtual switches (VSWITCHes) provide network connectivity between the management components, allowing command-driven requests to come from the z/VM platform or from other network-connected locations. They also provide the networks to which newly provisioned instances will be connected.

5. A security manager (such as RACF) provides additional resource protection beyond the DIRMAINT and SMAPI authorizations. It is optional, but if one exists it must be configured to support this architecture.

6. Linux source images are existing Linux guests whose disk images are captured for deployment by IBM Cloud Manager with OpenStack. These guests have specific configuration requirements that are discussed later in the paper.

7. Linux deployed instances are Linux guests created via deployment requests from IBM Cloud Manager with OpenStack.
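
For orientation, xcat requests can be issued from an ssh session on the XCAT management node (using the xcat administrator ID, default mnadmin, described later in this paper), or through the REST interface under /xcatws with authentication parameters as described in the xCAT web services documentation. A minimal sketch of listing the defined nodes from an ssh session (the host name in the prompt is illustrative):

   # List the node definitions known to xcat (the z/VM host, XCAT, ZHCP, and any deployed instances)
   [mnadmin@xcat ~]$ lsdef -t node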

Configuration values from these components are used in the configuration files of several OpenStack functions.

All of these components must be configured and working together properly to allow IBM Cloud Manager with OpenStack to provision to z/VM. The high-level flow among the components to support this process is:

- DIRMAINT, SMAPI, vswitches, and xcat are defined and configured.
- IBM Cloud Manager with OpenStack is implemented.
- Information from the component configurations and from the target deployment network (names used for xcat components, vswitches, IP subnets, DNS information, etc.) is defined in the OpenStack components.
- Linux guests that will be used as source images are prepared and customized for the environment, or images are obtained from other sources.
- If the source image is to be created from a prepared Linux guest, xcat is used to capture the Linux guest volume as a disk image.
- The captured image is transferred to the IBM Cloud Manager with OpenStack platform.
- The image is defined to IBM Cloud Manager with OpenStack.
- If specific OpenStack flavors (CPU/storage/DASD combinations used for deployment) beyond the defaults are needed, they are defined (illustrative commands for these steps are shown after this list).
- To deploy the image, IBM Cloud Manager with OpenStack invokes the appropriate OpenStack and xcat functions:
  o OpenStack creates the instance definition and interfaces with xcat.
  o The image (if it does not already reside there) is transferred to xcat.
  o Information on the instance networking configuration is transferred to xcat.
  o xcat invokes SMAPI and DIRMAINT commands to define the instance z/VM directory entry, virtual network interfaces (NICs), and minidisk storage.
  o xcat links to the instance storage and copies the disk image to the instance minidisk.
  o xcat starts the instance and customizes the network information.
- After IBM Cloud Manager with OpenStack is notified that the instance is running, it connects to the instance to execute any further configuration required (information passed from or invoked by user scripts and/or OpenStack Heat templates), such as adding storage, customizing the hostname, and installing and activating software.
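
IBM Cloud Manager with OpenStack normally drives the image definition, flavor definition, and deployment steps from its own interfaces, but the underlying OpenStack actions can be sketched with the Juno/Kilo-era command-line clients. This is an illustration only; the image name, file name, flavor sizes, and network UUID are assumptions, and the z/VM-specific image properties described in Enabling z/VM for OpenStack are omitted:

   # Register a captured disk image with Glance (names and file are illustrative)
   glance image-create --name rhel-base --disk-format raw --container-format bare --file 0100.img

   # Define a flavor sized for z/VM deployments (2 GB memory, 10 GB root disk, 2 vCPUs)
   nova flavor-create zvm.small auto 2048 10 2

   # Deploy an instance from the image using that flavor
   nova boot --image rhel-base --flavor zvm.small --nic net-id=<network-uuid> test-instance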

The rest of this paper highlights the key considerations for each component to ensure a working environment. For more information on these considerations beyond this paper, consult the following z/VM publications, which provide detailed configuration information and supplemental information specific to this architecture for the components above (DIRMAINT, SMAPI, the security manager, VSWITCH, xcat, and the Linux source images and deployed instances):

- z/VM: Directory Maintenance Facility Tailoring and Administration
- z/VM: Systems Management Application Programming Guide
- z/VM: CP Planning and Administration
- z/VM: Connectivity
- z/VM: Enabling z/VM for OpenStack

In addition to these publications, knowledge of OpenStack, OpenStack commands, xcat, and xcat commands is essential. OpenStack documentation can be found at http://docs.openstack.org/. xcat documentation can be found at http://xcat.sourceforge.net.

z/VM Directory Manager (DIRMAINT)

DIRMAINT must be configured and working, and the XCAT virtual machine must be authorized to use it. If you are using another directory manager product, consult its publications to verify whether it supports the SMAPI functions required by OpenStack, to determine whether it will work within this integration.

DIRMAINT is used to define system resources (CPU, memory, storage, network interfaces) for newly created instances. The storage DIRMAINT manages is defined as REGIONs (a range of space on specific volumes) and GROUPs (a set of REGIONs) in the DIRMAINT EXTENT CONTROL file. From these definitions DIRMAINT allocates minidisks.

A GROUP must be defined in DIRMAINT for IBM Cloud Manager with OpenStack to use when creating the root volume storage for new instances. The group must contain either all ECKD volumes or all FBA volumes; it cannot contain a mixture of both. The FBA volumes can be emulated devices (EDEVs) defined as volumes. OpenStack refers to a DIRMAINT group as a disk pool, and only one disk pool can be used by OpenStack. New instances therefore all use the same volume format, based on the disk pool for which OpenStack is configured.

The volumes in the disk pool must be larger than the largest root volume image that will be deployed. For example, a 13 GB ECKD disk image requires 3390-27 (or larger) volumes for successful deployment of new instances from that image.
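
As an illustration, the corresponding EXTENT CONTROL entries might look like the following minimal sketch (the region name, group name, volume label, and extents are assumptions for illustration; follow the Directory Maintenance Facility Tailoring and Administration publication for the exact layout):

   :REGIONS.
   *RegionId  VolSer   RegStart   RegEnd   Dev-Type
   ICMPOOL1   CM0001   1          END      3390-27
   :END.
   :GROUPS.
   *GroupName RegionList
   ICMPOOL    ICMPOOL1
   :END.

OpenStack would then be configured to use the group name (ICMPOOL in this sketch) as its disk pool.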

3390-27 volumes must be specified in EXTENT CONTROL with a default cylinder size of 32760. The z/VM default is 30051; however, in SMAPI the default size is 32760 and cannot be changed. The mismatch prevents the ZHCP guest from writing to the volume and copying the image to the new instance. The simple workaround is to specify 32760 in EXTENT CONTROL; this works even if the actual size of the 3390-27 volume is 30051 cylinders. However, if possible, those volumes should be changed to an actual size of 32760 cylinders to avoid confusion. In some environments volumes of this size are called 3390-32K and are defined in EXTENT CONTROL with a default size of 32760, which also works.

z/VM Systems Management Application Programming Interface (SMAPI)

SMAPI allows programmatic access to configure the z/VM environment. It consists of multiple z/VM guest machines that act as protocol servers (receiving requests via protocols such as IPv4, IPv6, and IUCV) and worker servers (processing the requests). In this architecture xcat makes ssh connections to ZHCP, and ZHCP connects to SMAPI via IUCV to carry out requests. SMAPI interfaces with DIRMAINT and with the z/VM system to carry out its tasks.

xcat must be authorized to use the SMAPI functions. This is done by authorizing the ZHCP guest to use them.

At times SMAPI needs to be restarted to resolve an issue or to ensure a change is fully absorbed. Recycling the SMAPI servers using the FORCE VSMGUARD / XAUTOLOG VSMGUARD procedure recycles the XCAT and ZHCP guests as well. If the XCAT and ZHCP guests are recycled, the compute services on the compute node connected to XCAT may also have to be recycled.
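
A sketch of that recycle procedure, issued from a suitably privileged z/VM user ID:

   FORCE VSMGUARD
   XAUTOLOG VSMGUARD

FORCE VSMGUARD stops the SMAPI guard server; logging it back on with XAUTOLOG restarts the SMAPI servers and, in this architecture, recycles the XCAT and ZHCP guests as well. Whether the OpenStack compute services also need restarting afterward, and their exact service names, depend on the installation.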

Extreme Cloud Administration Toolkit (xcat)

xcat is an open source product that supports provisioning virtual instances on a wide variety of platforms, including z/VM. Starting with z/VM 6.3, xcat is packaged as a part of z/VM. This xcat has been customized to support provisioning only on z/VM, and it is the only version supported in this integration; the open source xcat code from the internet cannot be used in this configuration.

xcat is packaged as two guests. The default names of these guests are:

o XCAT, the xcat management node. User commands (via ssh login, the graphical user interface, or a REST API) are executed here.
o ZHCP (z Hardware Control Point), which translates the user commands into the appropriate SMAPI and DIRMAINT requests for execution. The only access to ZHCP is through the XCAT guest.

In a single-LPAR environment (or in separate standalone LPAR environments), each LPAR has both an XCAT and a ZHCP virtual machine. In a multi-LPAR SSI environment, one LPAR has the XCAT virtual machine, and each LPAR has a ZHCP guest and definitions that point back to the XCAT virtual machine.

The XCAT and ZHCP guests run Linux but are locked down so that the Linux system cannot be modified beyond actions related specifically to xcat.

The XCAT and ZHCP attributes are all configured via the DMSSICNF COPY file on the MAINT 193 disk. Updates to this file are performed as a local modification (localmod), to preserve the contents when additional system maintenance is applied. Via this file the following items are established:

o The host names and IP addresses for XCAT and ZHCP. ZHCP has a single internal IP address; by default XCAT has two IP addresses, one for the internal network used to communicate with ZHCP, the other for outside access for management.
o The node names (how they are defined in XCAT) for the z/VM system, XCAT, and ZHCP.
o The vswitch(es) and associated OSA devices to be used.
o The DASD volumes that will be used to build the XCAT image repository. The volumes:
  - Must all be the same format (ECKD or FBA). They do not have to match the format that will be used to deploy new instances.
  - Must be CP formatted.
  - Must not be in the DIRMAINT EXTENT CONTROL file. XCAT will add them to that file and create appropriate REGION and GROUP entries.

When the repository volumes are first added, the XCAT guest formats them for Linux usage, creates a logical volume (LVM) with these devices, and mounts the logical volume at the /install mount point. This initial process can take several hours, depending on the number and size of the volumes specified. The logical volume size can be increased later, if needed, by adding additional DASD volumes and restarting xcat; any new volumes found will be Linux formatted and their space added to the logical volume.

Two user IDs are provided to administer xcat:

1. An ID to access the XCAT guest via ssh. It is defined by the XCAT_MN_admin parameter in DMSSICNF COPY and has a default value of mnadmin. This ID can ssh to the XCAT guest and issue xcat commands, but it does not have root authority, so it cannot modify many of the system configuration and log files.
2. An ID to access the XCAT graphical user interface (UI) via a browser. The default ID is admin. It provides access to the xcat management menus, where information on defined resources can be added or modified. A key feature of this interface is the ability to execute commands or shell scripts (which can be uploaded from the location where the browser is running) with root authority. This feature becomes important for validation and debugging purposes.

The XCAT guest must have available disk space to function properly. If the root filesystem reaches 100% full, xcat will not be able to provision new instances, and many xcat commands will fail. If the /install filesystem reaches 100% full, no more images can be captured. These filesystems should be monitored, and the root filesystem should be cleaned periodically. The main culprits with the potential to fill up the root filesystem are:

o /var/log/messages
o Log files in the /var/log/httpd directory
o /etc/xcat/auditlog.sqlite
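
A quick way to check the space from an ssh session as the xcat administrator ID is standard Linux tooling; for example (the host name in the prompt is illustrative):

   [mnadmin@xcat ~]$ df -h / /install
   [mnadmin@xcat ~]$ du -sh /var/log/messages /var/log/httpd /etc/xcat/auditlog.sqlite

Note that du may report permission errors for files the ssh ID cannot read, which is consistent with the default permissions described below.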

/var/log/messages and the files in the /var/log/httpd directory must initially be deleted via the xcat UI, as their default permissions do not allow the xcat ssh ID to delete them. However, via the UI the file permissions can be changed if desired.

/etc/xcat/auditlog.sqlite grows because the default setting for xcat is to log all requests and actions. Preventing this file from growing requires that setting to be disabled; the procedure is documented in the Enabling z/VM for OpenStack publication.

The ZHCP guest must also have available disk space. The only way to check its space is via ssh from XCAT, using the xcat user interface. The main culprit with the potential to fill up its root filesystem is /var/log/messages.

With the most recent xcat z/VM maintenance, exits are provided to automatically clean up the log files and prevent the disks from filling up. How to enable the exits is documented in the z/VM: Systems Management Application Programming publication.

Virtual Switches (VSWITCH)

Virtual switches (VSWITCHes in z/VM terminology) provide network connectivity among z/VM guests and to destinations outside the z/VM LPAR. Virtual switches have several considerations in this configuration:

o Between the XCAT and ZHCP guests, a private network using a dedicated vswitch (XCATVSW1) is defined by default. No other guests should be associated with this switch.
o Between XCAT and the management network (the path between the XCAT guest and the IBM Cloud Manager with OpenStack servers), XCATVSW2 is the switch used by default.
o For the network that the deployed instances will connect to, multiple vswitches can be defined. For initial configurations, using the same vswitch and subnet as the XCAT management network is recommended; xcat must be able to reach the new instances.

The default recommended starting connectivity architecture therefore uses XCATVSW1 for the private XCAT-to-ZHCP network and XCATVSW2 for both the management network and the network of the deployed instances.
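
To verify how a vswitch is actually defined (transport type, uplink OSA device, and authorized users), a privileged z/VM user can query it; for example, for the default management switch:

   QUERY VSWITCH XCATVSW2 DETAILS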

Other working vswitch configurations are possible, such as a separate data network for the deployed instances' application functions, but they are beyond the scope of this paper. Additional examples are shown in the Enabling z/VM for OpenStack guide. The key requirements are:

- The XCAT guest must be able to talk to the ZHCP guest.
- When IBM Cloud Manager with OpenStack is deployed on distributed platforms, the controller and the compute node for z/VM must be able to reach the XCAT guest on z/VM.
- The XCAT guest must be able to talk to newly provisioned guests on at least one NIC interface via a static IP address.

When defining the vswitches, the following considerations apply:

o The vswitches used in this architecture must be defined as layer 2 (Ethernet) switches.
o The OSA device address associated with the vswitch must be on port 0 (this limitation is a known requirement).
o The default configuration of the XCATVSW2 switch defines it as VLAN unaware. If a VLAN ID is required, the switch definition must be changed.
o The vswitch information (switch names, device addresses, VLAN IDs) will be required when customizing the IBM Cloud Manager with OpenStack controller and compute node configuration files.

Security Manager

If a security manager such as RACF is present in the environment, it must be configured to support this configuration. The main areas of conflict are:

o The ZHCP guest requires write access to the storage of each new instance, to copy the image onto the volume. Via the security manager, ZHCP must be enabled to link to these minidisks. For example, with RACF the recommendation is to give ZHCP the OPERATIONS authority.
o The ZHCP guest needs to be made exempt from spool checking, so that it can transfer files to various virtual machines. The z/VM Systems Management Application Programming guide (SC24-6234-08) documents the required commands.
o Every new instance must be authorized to connect to the vswitch(es) defined for its use. For example, with RACF the suggested method is to delete the profile for that vswitch and let the z/VM Control Program manage the access authority.
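
A minimal RACF sketch of the first and third items, assuming RACF protects the vswitch through the VMLAN class with a profile named SYSTEM.XCATVSW2 (profile names and the spool-checking exemption vary by installation; verify against the publications above before making changes). The first command gives ZHCP the OPERATIONS attribute; the second removes the vswitch profile so that CP manages connection authority:

   RAC ALTUSER ZHCP OPERATIONS
   RAC RDELETE VMLAN SYSTEM.XCATVSW2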

Linux Source Images

Linux guests from which disk images are captured for use by IBM Cloud Manager with OpenStack are called source images. They have specific configuration requirements, which are detailed in the Enabling z/VM for OpenStack guide. The following summarizes the required planning:

- A supported Red Hat Enterprise Linux or SUSE Linux Enterprise version must be used.
- The guest must have a single disk volume, in either fixed-block architecture (FBA) or extended count key data (ECKD) format. No other storage configuration is supported.
- It is recommended that the disk size in cylinders or blocks be on a GB boundary, as OpenStack supports only GB-boundary root volume sizes when deploying new images. However, it is possible to deploy the volume at the same size with a specific OpenStack configuration; in addition, with z/VM PTF UM34427, xcat can resize ECKD images that are smaller than the requested deployment volume size to match that size.
- The virtual address of the disk volume must be 0100.
- If the disk volume has a single partition, it must be a non-LVM partition mounted at the root of the Linux filesystem (mounted at /).
- If the disk volume has more than one partition:
  o The partition mounted at the root of the filesystem (/) must be non-LVM.
  o The disk image cannot be resized at deployment; it must be deployed with the same number of cylinders (ECKD) or blocks (FBA) it was created with.
- Any defined network interfaces (NICs) must have a virtual address above 1000.
- IPv6 must be disabled.
- The size of the root disk volume will impact deployment time for new instances. The recommendation is to keep the volume size below 15 GB. Observed creation times for guests with root volumes of 5-22 GB have ranged between 5 and 20 minutes on a variety of z Systems hardware and storage platforms (not including the time, discussed later in this paper, for the first deployment of an image). These are not exhaustive performance tests, just observations.
- Two script functions must be installed and configured on the source image guest to support initial customization of instances created from it:
  o xcatconf4z, which allows xcat to access the instance during initialization. This script is copied from the XCAT guest machine.
  o cloud-init, an open source initialization script package that is used by OpenStack to access the instance during initialization. It is downloaded from https://launchpad.net/cloud-init (versions 0.7.4 and 0.7.5 have been validated with this configuration).
- The Linux guest must have internet connectivity during the installation and customization of the cloud-init scripts, because the installation connects to various websites to download required prerequisite packages. If internet connectivity is not possible, a local repository can be set up with a copy of the internet repository and used instead.
- Disk images that are created must be deployed on the same DASD type as the source image guest:
  o ECKD disk images can only be deployed on ECKD volumes.
  o FBA disk images can only be deployed on FBA volumes.

If you are not sure about the image format, the first 36 bytes of the image file describe the disk type and target volume size. This can be seen using the Linux head command. For example:

   [/]# head -c 36 0100.img
   xcat CKD Disk Image: 17477 CYL

There must be enough space in the XCAT guest image repository (the /install directory) to hold the captured images. Some compression is performed, so the stored disk image is smaller than the volume size.

Linux Deployed Instances

Linux guests deployed from captured disk images are given z/VM directory names based on a parameter (instance_name_template) in the OpenStack nova configuration file (/etc/nova/nova.conf). The general format is a static prefix followed by some number of generated characters, with the total length not to exceed 8. The default is abc%04x, which means the first three characters are ABC and the next four are generated hexadecimal characters; names such as ABC0001 or ABC005D will be assigned to the instances. Customize this value to avoid conflicts with existing directory entry names.

By default a random Linux hostname is assigned to a new instance. If a specific hostname is desired, the necessary commands to set it must be placed either in the user data script (when deploying from the IBM Cloud Manager with OpenStack user or administrative interface) or in an OpenStack Heat template (when deploying a template pattern from the administrative interface).

OpenStack Configuration Files

IBM Cloud Manager with OpenStack uses OpenStack components for provisioning and management of the cloud environment. Details on OpenStack and its components can be found at https://www.openstack.org. The installed components each have configuration files containing parameters related to instance deployment on z/VM:

o Nova (instance deployment and management): /etc/nova/nova.conf
o Glance (image management): /etc/glance/glance-api.conf, /etc/glance/glance-cache.conf
o Neutron (network management): /etc/neutron/neutron.conf, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, /etc/neutron/plugins/zvm/neutron_zvm_plugin.ini
o Cinder (storage management): /etc/cinder/cinder.conf

Some of these parameters are set based on values used during the controller and compute node installation. Others have to be added based on the planned configuration. The Enabling z/VM for OpenStack publication provides more detailed information for each required parameter.

If there is a network address translation (NAT) environment between the network the compute node server for z/VM is on and the network z/VM and xcat are on, use the my_ip parameter in /etc/nova/nova.conf to set the IP address xcat should use to connect back to the compute node. OpenStack sends the my_ip value to xcat during a nova boot action; if my_ip is not set, OpenStack sends the non-NAT IP address, xcat cannot connect back, and the deployment fails. Setting my_ip to the NAT address resolves this problem.
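
For illustration, the two nova.conf parameters discussed in this and the previous section might be set as follows (the name prefix and IP address are example values only):

   # /etc/nova/nova.conf (excerpt)
   [DEFAULT]
   # z/VM directory names: 3-character prefix plus 5 generated hex characters (8 total)
   instance_name_template = icm%05x
   # Address xcat should use to reach this compute node from behind NAT
   my_ip = 192.0.2.10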

The following OpenStack constraints apply when managing the z/VM Linux environment using IBM Cloud Manager with OpenStack:

- OpenStack does not discover running Linux guests; it can only manage guests it has deployed. Linux guest source images must first be created on Linux on z Systems through manual or other methods (for example, using IBM Wave for z/VM) and then customized and imported via xcat.
- The first deployment of an instance from an image can take longer than subsequent deployments, due to the network time spent transferring the image from the compute node to xcat. That first deployment has been observed to take as much as 30-40 minutes; subsequent deployments are much shorter, since the image is already in the xcat repository.
- If xcat is ever restarted, the neutron-zvm-agent service on the compute node must be restarted, as this service initializes network information for the OpenStack nova function and the XCAT managed node.

Other Considerations

Installation Verification Program (IVP)

After the IBM Cloud Manager with OpenStack controller and compute nodes are installed, it is strongly recommended that the Installation Verification Program (IVP) be run to validate their OpenStack settings against the DIRMAINT, SMAPI, VSWITCH, and xcat configurations. The IVP can be downloaded from http://www.vm.ibm.com/download/packages/descript.cgi?zxcativp. The steps to use the IVP are documented in z/VM: Enabling z/VM for OpenStack. The IVP highlights any mismatches found between the OpenStack definitions and the z/VM component definitions so that they can be corrected.

Summary

This paper has highlighted the key components required for a successful integration of IBM Cloud Manager with OpenStack with z/VM, along with the important considerations for each component involved. Careful configuration planning is required. This paper can serve as a starting point, with more details provided in the documentation and links noted in the relevant sections.