HPE Virtualized NonStop Deployment and Configuration Guide


HPE Virtualized NonStop Deployment and Configuration Guide

Abstract
This guide provides an overview of HPE Virtualized NonStop (vns) and describes the tasks required to deploy a vns system in an OpenStack private cloud or in a VMware vsphere based virtualization environment. This guide is intended for personnel who are OpenStack or VMware administrators or have completed Hewlett Packard Enterprise training on vns system support.

Part Number:
Published: March 2018
Edition: L17.02 and subsequent L-series RVUs

Contents

- About This Document
  - Supported Release Version Updates (RVUs)
  - New and Changed Information
  - Publishing History
- Introduction to HPE Virtualized NonStop (vns)
  - Core Licensing and NonStop Dynamic Capacity (NSDC)
  - Deployment environment for vns systems
    - OpenStack services for vns deployment
  - Virtualized NonStop System interconnect using RoCE
    - RoCE configuration requirements
    - Ethernet switches and VLAN considerations
  - Virtualized NonStop hardware and software requirements
  - Supported Virtual Machines (VMs) for vns
    - Virtualized NonStop System Console (vnsc)
    - Virtualized NonStop CPU (vns CPU)
      - Requirements for vns CPUs
      - Considerations for hyperthreads
      - Memory consideration for Ubuntu
    - NonStop Virtualized CLuster I/O Modules (vclims)
      - IP vclim and Telco vclim
      - Supported network interface configuration (NIC) options
      - Supported NICs for SR-IOV and PCI-Passthrough
      - NIC interfaces for IP vclim and Telco vclim
      - Storage vclim
      - Storage vclim virtio network interfaces
      - Requirements for Storage vclims
      - Simplified Logical Unit Number (LUN) approval
      - LUN Manager commands for virtualized environments
- Managing Virtualized NonStop (vns)
  - Virtualized NonStop Deployment Tools
    - Features of Virtualized NonStop Deployment Tools
    - Relation of Virtualized NonStop Deployment Tools to OpenStack Services
  - Fault zone isolation options for vns
  - Flavor management for Virtualized NonStop virtual machines
- Planning tasks for a Virtualized NonStop system in an OpenStack private cloud
  - Mandatory prerequisites for a vns system in an OpenStack private cloud
- Configuring a Virtualized NonStop system for deployment
  - Configuring the Host OS on the compute nodes (ConnectX-4)
  - Configuring OpenStack (ConnectX-4)
- Installing Virtualized NonStop software and tools on OpenStack
  - Obtaining Virtualized NonStop software for OpenStack
  - Sourcing in an OpenStack resource file
  - Installing Virtualized NonStop Deployment Tools on Ubuntu
  - Installing Virtualized NonStop Deployment Tools on Red Hat OpenStack Platform (RHOSP)
    - Creating OpenStack service and user on RHOSP
    - Creating MySQL database on RHOSP
    - Installing PIP for RHOSP
    - Configuring control node configuration on RedHat Open Stack platform
  - Importing Virtualized NonStop images into Glance (OpenStack)
  - Creating the Virtualized NonStop system flavors
  - Create the vnsc
- Installing Virtualized NonStop software and tools on VMware
  - Obtaining Virtualized NonStop software for VMware
  - Minimum configuration requirements for the vns VMware environment
  - Verify that the requirements for vns on VMware are met
  - Fault isolation levels for vns on VMware
  - Import Virtualized NonStop images into the VMware environment
  - Predeployment hardware tasks for VMware environment
  - Failover support for vns network interface cards (NICs) in a VMware environment
  - Import vns package and set up Orchestrator
  - Plan and create the vns system for a VMware environment
    - Supported attributes for JSON specification file for VMware
    - System specification attributes in the JSON file for VMware
    - CPU specification for VMware (JSON file)
    - CLIM specification for VMware (JSON file)
    - Storage volume specification for VMware (JSON file)
  - Creating a Virtualized NonStop System (vnsc) for VMware
  - Managing vns tasks on VMware
- Deploying the Virtualized NonStop System
  - Post-deployment procedures for vclims
  - Configuring a provisioned Virtualized NonStop system
  - Booting the vns system
- vns administrator tasks
  - Scenarios for vns administrators
  - Managing vns resources
  - Scenarios that prompt reprovision of resources
  - Save or load vns configuration in the Create System workflow
  - Shutting down the vns system and vclims
  - Reviewing maximum transmission unit (MTU) in OpenStack
- Troubleshooting vns problems on OpenStack
  - Virtual Machines shut down during deployment
  - vns Deployment fails with a timeout waiting for disks to be created
  - Collecting vclim crash dumps and debug logs
  - vclim is unresponsive at the Horizon console
  - Using the vclim serial log to assist with troubleshooting
  - Debugging hypervisor issues
  - Networking issues for vclims
  - Issues with HSS boot, reload, or CPU not being online
- Troubleshooting vns problems on VMware
  - vsphere vcenter server disconnects ESXi hosts from the Datacenter, or other license expiration issues cause a disconnect to the Datacenter
- Websites
- Support and other resources
  - Accessing Hewlett Packard Enterprise Support
  - Accessing updates
  - Customer self repair
  - Remote support
  - Warranty information
  - Regulatory information
  - Documentation feedback
- Creating a Virtualized NonStop System console (vnsc)
  - Prerequisites for creating the Virtualized NonStop System Console (vnsc)
  - Creating a vnsc
- vns OpenStack CLI commands
  - clim network add, clim network remove, clim remove, clim reprovision
  - cpu remove, cpu reprovision
  - flavor list
  - nsk-volume remove, nsk-volume reprovision
  - system config save, system create, system delete, system expand, system list, system reprovision, system show
- Horizon interface for vns
  - Project Dashboard for vns systems
    - System Details: Overview Tab, CPUs Tab, CLIMs Tab, Volumes Tab
  - Admin Dashboard for vns systems
    - Launch System workflow, Expand System workflow
    - Reprovision CLIM workflow, Reprovision CPU workflow, Reprovision Volume workflow
    - Remove CLIM workflow, Remove CPU workflow, Remove Volume workflow
    - Delete System action, Create Flavor workflow
- Using ELK for NonStop and Virtualized NonStop event logs
  - NonStop events and ELK
  - Requirements and tested environment for ELK setup
  - Mandatory prerequisites for setting up ELK
  - ELK Installation and Configuration
  - Adjusting the Java heap memory
  - Code Example: syslog-type filter
  - Verify Elasticsearch and Kibana installation
  - Code Example: Prospector configuration for Virtualized NonStop running in an Ubuntu OpenStack cloud
  - Using Kibana in a Linux single server environment
- Supported OFED drivers and HCA firmware by RVU
- Warranty information
  - Belarus Kazakhstan Russia marking
  - Turkey RoHS material content declaration
  - Ukraine RoHS material content declaration

Copyright 2018 Hewlett Packard Enterprise Development LP

Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.

Acknowledgments
Intel, Itanium, Pentium, Intel Inside, and the Intel Inside logo are trademarks of Intel Corporation in the United States and other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Adobe and Acrobat are trademarks of Adobe Systems Incorporated. Java and Oracle are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.

About This Document
This guide provides an overview of HPE Virtualized NonStop (vns) and describes the tasks required to deploy a vns system in an OpenStack private cloud or in a VMware vsphere based virtualization environment. This guide is intended for personnel who are OpenStack or VMware administrators or have completed Hewlett Packard Enterprise training on vns system support.

Supported Release Version Updates (RVUs)
This publication supports L17.02 and all subsequent L-series RVUs until otherwise indicated in a replacement publication.

New and Changed Information

New and Changed Information in this edition
Added or updated the following topics:
- Installing Virtualized NonStop software and tools on VMware (new chapter with several new topics)
- Troubleshooting vns problems on VMware (new chapter)
- Introduction to HPE Virtualized NonStop (vns)
- Core Licensing and NonStop Dynamic Capacity (NSDC)
- Simplified LUN approval
- Deployment environment for vns systems
- Deploying the Virtualized NonStop System
- Features of Virtualized NonStop Deployment Tools
- IP vclim and Telco vclim
- RoCE configuration requirements
- Planning tasks for a Virtualized NonStop system in an OpenStack private cloud
- Configuring OpenStack (ConnectX-4)
- Configuring the Host OS on the compute nodes (ConnectX-4)
- Installing Virtualized NonStop software and tools on OpenStack
- Installing Virtualized NonStop Deployment Tools on Ubuntu (minor reference changes)
- Installing Virtualized NonStop Deployment Tools on Red Hat OpenStack Platform (RHOSP) (minor reference changes)
- Booting the vns system
- vns OpenStack CLI commands
- Horizon interface for vns

- Using ELK for NonStop and Virtualized NonStop event logs
- Supported OFED drivers and HCA firmware by RVU

New and Changed Information in the previous edition
Updated the following topics:
- Introduction to HPE Virtualized NonStop (vns)
- Core Licensing
- Deployment environment for vns systems
- Deploying the Virtualized NonStop System
- Features of Virtualized NonStop Deployment Tools
- Installing Virtualized NonStop Deployment Tools on Red Hat OpenStack Platform (RHOSP)
- Virtual Machines shut down during deployment
- Using the vclim serial log to assist with troubleshooting
- Shutting down the vns system and vclims
- vns OpenStack CLI commands
- Horizon interface for vns
- Using ELK for NonStop and Virtualized NonStop event logs
- Supported OFED drivers and HCA firmware by RVU

New and Changed Information in an earlier edition
Removed a reference to a document.

New and Changed Information in the original edition
This is a new guide.

Publishing History
Part Number    Product Version    Publication Date
               N.A.               March
               N.A.               August
               N.A.               July
               N.A.               March

Introduction to HPE Virtualized NonStop (vns)
HPE Virtualized NonStop (vns) expands the NonStop system family by introducing virtualization to NonStop. Virtualization lets you create a virtual machine (VM) from a physical resource (such as an Intel Xeon based physical server) and share that resource with other VMs and a host operating system (OS). A guest OS runs in each VM. In the case of vns, the guest OS is the NonStop software stack running on VMware vsphere ESXi or Linux Kernel-based Virtual Machine (KVM) hypervisors. The hypervisors create an emulated hardware environment for the VMs that run a guest OS such as NonStop. The hypervisor provides a consistent interface between the VMs and the physical hardware as shown in Figure 1.

Figure 1: Virtualization example

Virtualized NonStop is cloud-ready and provides a new deployment tool that lets you specify the VMs and deploy them in a private cloud managed by VMware or OpenStack. Table 1 describes the vns characteristics and Figure 2 shows an example of vns VMs running in a private cloud.

Table 1: Characteristics of Virtualized NonStop
- Deployment: Private cloud using OpenStack Mitaka/Newton distributed under Ubuntu, Red Hat OpenStack Platform (RHOSP), or VMware vsphere
- Processor/Processor model: Intel Xeon x86 processors running in 64-bit mode
- Supported RVU: L17.02 and later RVUs
Table Continued

Table 1 (continued): Characteristics of Virtualized NonStop
- Virtualized environment: KVM hypervisor (included with Ubuntu and with RHOSP), or the VMware vsphere stack, which comprises virtualization, management, and interface layers. The two core components of vsphere are VMware ESXi and VMware vcenter Server:
  - ESXi is the virtualization platform where you create and run VMs
  - vcenter Server is the service through which you manage multiple hosts connected in a network and pool host resources
  For more information about VMware, see Installing Virtualized NonStop software and tools on VMware.
- Virtual Machines (VMs): These NonStop resources are supported as VMs:
  - NonStop Virtualized CPUs (vns CPUs); Enterprise-Edition (also known as high-end) and entry-class options are supported
  - Virtualized NonStop System Console (vnsc); 1 vnsc is required
  - NonStop Virtualized CLuster I/O Modules (vclims); up to 56 vclims are supported
  For more information, see Supported Virtual Machines (VMs) for vns on page 16.
- IP CLIM and Telco CLIM networking: Supports VLANs and VXLANs for virtio, vmxnet3, SR-IOV, and PCI-passthrough network interfaces
- Software (OpenStack or VMware environment): Ubuntu LTS and RHOSP 10.0 distros of OpenStack; supported OpenStack releases are Newton and Mitaka. VMware vsphere 6.5 or later and vrealize Orchestrator 7.3 or later.
Table Continued

Table 1 (continued): Characteristics of Virtualized NonStop
- Software delivery (OpenStack environment):
  - Custom SUT in QEMU Copy on Write (QCOW2) format for initial deployment; regular SUT for subsequent software updates
  - QCOW2 image for vclims for vclim initial deployment (CLIM DVD for subsequent CLIM software updates)
  - Virtualized NonStop deployment tools for OpenStack
  - ISO image for NonStop System Console DVD
  - ISO image for Halted System Services (HSS)
- Software delivery (VMware environment):
  - SUT in VMDK format for initial deployment
  - VMDK image for vclims for vclim initial deployment (CLIM DVD for subsequent CLIM software updates)
  - Virtualized NonStop deployment workflow for VMware vrealize Orchestrator (vns Deployment for VMware)
  - ISO image for NonStop System Console DVD
  - ISO image for Halted System Services (HSS)
- System interconnect: 40 Gbps RoCE (RDMA over Converged Ethernet)
- Clustering: Supports native RoCE clustering between high-end Virtualized NonStop systems
- Expand networking: Supports connectivity to NonStop X and NonStop i systems using Expand-over-IP
- Storage: Supports several storage options, including SAS drives, HPE StoreVirtual Virtual Storage Appliance (VSA), and storage arrays
- Events can be sent to an Elasticsearch, Logstash, and Kibana (ELK) environment for monitoring and analysis (L17.08 and later RVUs). See Using ELK for NonStop and Virtualized NonStop event logs.
Table Continued

Table 1 (continued): Characteristics of Virtualized NonStop
- NonStop Dynamic Capacity (NSDC): As of L18.02 and later RVUs, NSDC is a feature available on vns systems that enables temporary scale-up of the vns CPUs of a system to handle temporary spikes in workload. For more information, see the NonStop Core Licensing Guide.
- Software Core Licensing: 2-, 4-, and 6-core software licensing options (high-end); 1-core software license (entry-class). A core license file is required for Virtualized NonStop and clustering. See Core Licensing and NonStop Dynamic Capacity (NSDC) on page 13.
- Minimum Development system: Requires two compute nodes that can be provisioned for two vns CPU VMs, two Storage vclim VMs, two IP vclim VMs, and one vnsc VM

Figure 2: VMs of a Virtualized NonStop system deployed in a private cloud

For more information, see Deployment environment for vns systems on page 13.

Core Licensing and NonStop Dynamic Capacity (NSDC)
There are core licensing requirements for Virtualized NonStop and RoCE clustering. As of L18.02 and later RVUs, NonStop Dynamic Capacity (NSDC) is a feature available on vns systems that enables temporary scale-up of the vns CPUs of a system to handle temporary spikes in workload. For more information about core license requirements or NSDC, see the NonStop Core Licensing Guide.

Deployment environment for vns systems
Deploying and hosting vns in a private cloud environment is supported in these environments:

- Ubuntu with OpenStack Mitaka or Newton Release
- RedHat OpenStack Platform 10.0
- VMware vsphere Hypervisor (ESXi) 6.5 and vcenter server 6.5

The vns system uses the OpenStack services described in OpenStack services for vns deployment on page 14, and Figure 3 shows these services.

NOTE: OpenStack services are not used for VMware.

OpenStack services for vns deployment
The vns system uses these OpenStack services for deployment.
- Nova compute service: Supports an API to instantiate and manage VMs on KVM
- Keystone identity service: Provides authentication services
- Glance image service: Manages VM images, including querying/updating image metadata and retrieving actual image data
- Neutron networking service: Provides network connectivity and IP addressing for VMs managed by the Nova compute service
- Cinder block storage service: Provides an API to instantiate and manage block storage volumes
- Horizon dashboard service: Provides web-based user interfaces for creating, allocating, and managing OpenStack resources within a cloud

Figure 3: OpenStack services for vns deployment
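Before planning a deployment, you can confirm that these services are registered and healthy in the target cloud. The following commands are standard OpenStack and component CLI calls shown only as an illustrative check; the entries in your service catalog may differ.

# List the services registered in the Keystone catalog
openstack service list

# Confirm the compute, network, and block storage services are up
nova service-list
neutron agent-list
cinder service-list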

Virtualized NonStop System interconnect using RoCE
A Virtualized NonStop system uses RDMA over Converged Ethernet (RoCE) as the system interconnect fabric, similar to how an InfiniBand fabric is used in NS7 systems. vns uses Single Root I/O Virtualization (SR-IOV) technology to share a single RoCE NIC with multiple NonStop VMs running on the same physical server. RoCE clustering of high-end Virtualized NonStop systems is supported. For more information, see the NonStop X Cluster Solution Manual.

RoCE configuration requirements
A vns system requires RoCE v2.
- RoCE NIC for ConnectX-4: HPE InfiniBand EDR/Ethernet 840QSFP28 Adapter in the compute nodes, which provides two 40 Gbps Ethernet ports and drivers that support RoCE and ConnectX-4
- Ethernet switches: 40 Gbps Ethernet ports and support for Data Center Bridging (DCB) protocols, specifically IEEE 802.3x Global Pause, to provide buffer management for the Ethernet switches
- Two independent interconnect fabrics: Two ports on the Ethernet host adapter card must be connected to two separate Ethernet switches. The two switches function as independent interconnect fabrics to provide fault tolerance.

Ethernet switches and VLAN considerations
The Virtualized NonStop CPUs and vclims communicate through a Virtual LAN (VLAN) configured on top of the RoCE fabrics. The VLAN enforces network traffic isolation for security and provides Quality of Service (QoS) for the RoCE traffic between VMs.

Virtualized NonStop hardware and software requirements
For more information about the hardware and software requirements for vns, contact your HPE representative.
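As a quick sanity check of the interconnect requirements above, you can inspect link speed and pause (IEEE 802.3x flow control) settings on the host-side RoCE ports with standard Linux tools. The interface names below (hed5, hed6) are placeholders taken from later examples in this guide; substitute the names of your RoCE ports.

# Confirm both RoCE ports link at 40 Gbps
ethtool hed5 | grep Speed
ethtool hed6 | grep Speed

# Confirm pause (Global Pause) settings on each port
ethtool -a hed5
ethtool -a hed6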

Supported Virtual Machines (VMs) for vns

Virtualized NonStop System Console (vnsc)
The vnsc provides access to the OSM tools for Virtualized NonStop system management.

vnsc VM characteristics:
- Default is one vnsc instance, and the vnsc instance must run in a licensed compute node.
- Runs Windows Server 2012 R2 or Windows Server 2016 in a VM.
- Requires a minimum of four virtual hyperthreads (for example, two hyperthreaded physical cores with two hyperthreads per core). Dedicated pinning is not required.
- Requires a minimum of 8 GB virtual memory (1 GB huge pages are not required but offer better performance if selected).

Virtualized NonStop CPU (vns CPU)
The vns CPU provides the vns application and database compute workload and has these characteristics.

Characteristics of the vns CPU VM:
- Runs as a VM in an Intel Xeon-based server as long as hardware requirements are met.
- Up to 16 vns CPUs per vns system are supported.
- vns CPUs in the same system must be deployed in different physical servers for fault tolerance.
- Each vns CPU in a vns system should be configured identically to all the other instances.
- Cores are dedicated (pinned) and isolated to ensure deterministic timing, fault isolation, and performance.
- Core options: 2-, 4-, and 6-cores (high-end); one-core (entry-class)

Requirements for vns CPUs
- vns CPU (Entry-class): 1 pinned (dedicated) physical core with a single active hyperthread per core; 32 to 64 GB memory in 1 GB increments backed by 1 GB pinned huge pages.
- vns CPU (High-end): 2, 4, or 6 pinned physical cores with a single active hyperthread per core; 64 to 192 GB memory in 1 GB increments backed by 1 GB pinned huge pages.
- All vns CPU types: vns CPUs in the same system must be deployed in different physical servers for fault tolerance.

Considerations for hyperthreads
HPE recommends that physical cores assigned to vns CPUs and vclims reside in the same Non-uniform memory access (NUMA) zone for best performance. The vns CPUs require dedicated cores with hyperthreading enabled. One hyperthread is used by the vns CPU. The other hyperthread must be kept idle by the host operating system, dedicating it to the vns CPU. The NonStop vclims require hyperthreading to be enabled, and the vclim uses both hyperthreads. The host operating system must ensure the core is not used for other purposes, dedicating it to the vclim.

Memory consideration for Ubuntu
Due to a bug in the libvirt-bin package on some versions of Ubuntu 16.04, virtual machines may not launch correctly when configured with more than 92 GB of memory. The libvirt-bin package on Ubuntu must be upgraded to at least the ubuntu10.12 package revision if a vns CPU VM with more than 92 GB of memory is required.

NonStop Virtualized CLuster I/O Modules (vclims)
Virtualized NonStop systems support the IP vclim and Telco vclim and the Storage vclim on page 20, which function as offload engines. With vclims, there are no SNMP agents, ilo communications, or firmware to manage. vclims are deployed through the Virtualized NonStop Deployment Tools.

IP vclim and Telco vclim
The IP vclim and Telco vclim provide virtualized NonStop networking and function as networking offload engines with 10 Gigabit Ethernet (10GbE) network interface configurations (NICs) and five customer-configurable Ethernet ports. vclims are deployed using the Virtualized NonStop Deployment Tools.

Characteristics of an IP vclim and Telco vclim:
- Runs as a VM in a rack-mount physical server
- Communicates with the vns CPUs over 40GbE RoCE
- Provides the internal (maintenance) communication and external (customer) communication
- Supports up to 5 virtual network interfaces. Each virtual network interface can be:
  - A virtual network interface: virtio_net for OpenStack or vmxnet3 for a VMware environment
  - A virtual function on a 10GbE NIC, shared with the vclim using Single Root I/O Virtualization (SR-IOV)
  - A physical function on a 10GbE NIC, assigned to the vclim using PCI pass-through
  The vclim supports the use of VLAN and VXLAN for each of these interfaces.
- Supports several features found on physical IP and Telco CLIMs such as UDP, TCP, SCTP, and raw sockets over IPv4, IPv6, and IPSec
- Supports connectivity through Expand-over-IP to HPE Integrity NonStop X and HPE Integrity NonStop i systems
- Cores are dedicated and isolated to ensure deterministic timing, fault isolation, and performance.

Core options, virtual memory, and pinned huge pages for vclim VMs are defined per vclim (hyperthreads, dedicated cores(1), and dedicated memory backed by 1 GB huge pages), available as a user option or as the default. (1) The dedicated cores are isolated in the hypervisor.

Supported network interface configuration (NIC) options
- PCI passthrough: Passes the entire NIC to the vclim, providing direct hardware access and the best performance of the NIC options, comparable to a physical CLIM. Requires specific supported NICs for the vclim (see Supported NICs for SR-IOV and PCI-Passthrough on page 20).
- SR-IOV: Passes one virtual function of the NIC to the vclim, providing direct hardware access and good performance, comparable to a physical CLIM. Multiple VMs can share the NIC through other virtual functions, although multiple VMs compete for NIC throughput. Requires specific supported NICs for the vclim (see Supported NICs for SR-IOV and PCI-Passthrough on page 20).
- virtio (NOTE: vmxnet3 is the virtual network interface in the VMware environment): Sends Ethernet packets through a virtual (virtio_net) device for the vclim; the hypervisor directs these packets to the applicable physical NICs. A virtio device for each network interface is provided to the vclim. Multiple VMs can share the NIC through multiple virtio_net devices. Lets the hypervisor implement Software Defined Networking (SDN) technologies such as VLAN, VXLAN, Open vswitch (OVS), and Distributed Virtual Routing (DVR) between the vclim and the physical network. Provides easier networking management, although performance might not be as strong and some SDN technologies might further reduce performance.

Supported NICs for SR-IOV and PCI-Passthrough
If an IP vclim or Telco vclim uses SR-IOV virtualization or PCI passthrough virtualization, one of these 10GbE NICs for rack-mount servers is required:
- HPE Ethernet 10Gb 2-port 560SFP+ Adapter
- HPE Ethernet 10Gb 2-port 560FLR-SFP+ Adapter
- HPE Ethernet 10Gb 2-port 530T Adapter
- HPE Ethernet 10Gb 2-port 533FLR-T Adapter
- HPE Ethernet 10Gb 2-port 530SFP+ Adapter
- HPE Ethernet 10Gb 2-port 530FLR-SFP+ Adapter

NIC interfaces for IP vclim and Telco vclim
- eth0: Reserved for manageability networking
- eth1-eth5: 10GbE NIC ports (customer-configurable)
- eth6: Reserved for manageability support (maintenance LAN)

NOTE: Network manageability requires that at least one of the two vclims (NCLIM000 and NCLIM001) has NIC interface eth6 configured for $ZTCP0 or $ZTCP1.

Storage vclim
The Storage vclim functions as an offload engine for storage access. Volume level encryption is supported with an additional license.

Storage vclim VM characteristics:
- Runs as a VM in a rack-mount physical server
- Offers two configuration options: standard and encrypted (encryption requires an additional license)
- Supports ETI-NET vbackbox iscsi Virtual tape, Virtual Storage Appliance (VSA), SAS drives, and storage arrays
- Supports virtio_blk interfaces to access virtual block I/O devices (drives)
- Provides up to 25 drives per Storage vclim in OpenStack (24 drives are supported when both iscsi tape and encryption are used)
- Supports up to 25 drives per Storage vclim in a VMware environment
- Runs Maintenance Entity Units (MEUs) in the first two Storage vclims
- Cores are dedicated (pinned) and isolated to ensure deterministic timing, fault isolation, and performance. Core options are:
  - 4 cores for a standard Storage vclim
  - 8 cores for an NSVLE Storage vclim

Storage vclim virtio network interfaces
- eth0: Reserved for manageability support
- eth1: Customer-configurable port that provides Enterprise Security Key Manager (ESKM) connectivity for NonStop Volume Level Encryption (NSVLE)
- eth2: Customer-configurable port that provides iscsi connectivity for virtual tape

Requirements for Storage vclims
- Storage vclim (no encryption): 8 virtual hyperthreads backed by 4 pinned hyperthreaded physical cores; 4 GB of virtual memory backed by 1 GB pinned huge pages.
- Storage vclim (encrypted): 16 virtual hyperthreads backed by 8 pinned hyperthreaded physical cores; 4 GB of virtual memory backed by 1 GB pinned huge pages.
- All vclims: vclim failover pairs (primary and backup vclims) in the same system must be deployed in different physical servers for fault tolerance.

Simplified Logical Unit Number (LUN) approval
NOTE: vns systems in a VMware environment do not support the simplified approval process.

To significantly reduce the configuration time for storage devices that are attached to each Storage vclim, a simplified approval process is supported on vns systems running L18.02 or later in an OpenStack cloud. When a Storage vclim is deployed, a configuration file is passed to the Storage vclim containing the serial number to LUN number mapping. LUN ordering occurs in the order the virtual disks are specified when the system is created. $SYSTEM is always mapped to a fixed LUN number. Once the vns system is deployed, running the lunmgr --approve command with the yesall option instructs lunmgr to read the passed-in configuration file and approve all of the LUNs without further user interaction. This command needs to be run once on each Storage vclim.

Using simplified LUN approval requires the following CLIM images:
- CLIM DVD: T0853L03 DBA
- CLIM QCOW2: T0976L03 DBA

LUN Manager commands for virtualized environments
NOTE: As of L18.02 and later RVUs, simplified LUN approval is supported for vns systems running in an OpenStack cloud. vns systems running in a VMware environment do not currently support the simplified approval process.

In a virtualized environment, you use the Logical Unit Number (LUN) Manager to manage virtual block I/O devices and iscsi tape devices.
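As a rough sketch of the simplified approval flow described above, the approval is run once on each Storage vclim after the system is deployed. The exact spelling of the yesall option is not shown in this guide, so treat the flag below as an assumption and confirm the supported form with lunmgr -h (--help) on the vclim.

# Run once per Storage vclim after the vns system is deployed
# (--yesall spelling assumed; confirm with lunmgr --help)
lunmgr --approve --yesall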

The Virtual Block I/O devices are Type 9 devices with a dedicated LUN number range. The iscsi Tape devices are Type 3 devices with a LUN number range of 1-32.

There are several new and updated LUN Manager commands to support a virtualized environment.

NOTE: These commands do not apply to physical NonStop system environments. To differentiate between LUN Manager commands for physical and virtual environments, issue the help command: lunmgr -h (--help)

For more information on using LUN Manager and its relation to the Storage CLIM, see the NonStop Cluster I/O Protocols (CIP) Configuration and Management Manual.

New LUN Manager commands (L17.02 and later RVUs)

lunmgr -t (--addiscsitape) <ip address>
Issues an iscsi Discovery command to the IP address that you input. Once discovery completes, the LUN Manager logs in to all available tape devices to establish an iscsi communication session. The LUN Manager then assigns the next available LUN number to the tape device and adds it to stomod. This command example shows:
- Two tape devices (VBACK00 and VBACK01) are discovered
- IP port and IP address information for the devices
- Login attempts and successful logins for the devices

lunmgr --deliscsitape <iscsi name>
Issues an iscsi logoff command to the iscsi name that you input and closes the iscsi communication. The LUN Manager deletes the tape device from stomod.
NOTE: To delete a tape device, enter the entire iscsi name.
This command example shows a user deleting the VBACK00 tape device by entering the entire iscsi name; the tape logs off to close the communication:
lunmgr --deliscsitape iqn com.etinet:VBACK00
Table Continued

New LUN Manager commands (L17.02 and later RVUs), continued

lunmgr -v (--printvolname)
Prints the LUN number and both the Primary and Alternate volume name of each known (approved) Virtual Block I/O device on the CLIM.
NOTE: The volume name is only displayed if the volume used an image when it was created or it has been initialized with SCF.

lunmgr -x (--cleancache)
Cleans the LUN and SID caches of old LUNs. Use this command if devices are displayed in the LUN and SID caches but are not seen by the vclim.
TIP: Running the --find command after running the --cleancaches command shows there are no longer any devices in the LUN cache and these are also no longer on the vclim.
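A minimal illustration of the tip above, run on a Storage vclim: list the devices, clean the stale cache entries, and list again to confirm the caches are empty. The option spellings follow the descriptions in this section.

# Show devices known, present, and cached on this vclim
lunmgr -f

# Remove stale LUN and SID cache entries
lunmgr -x

# Confirm that no stale devices remain in the caches
lunmgr -f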

Changed LUN Manager commands (L17.02 and later RVUs)

lunmgr -a (--approve)
NOTE: The yesall parameter is valid in an OpenStack environment running L18.02 or later. It is not valid in a VMware environment. For more information, see Simplified LUN approval.
Displays the next Virtual Block I/O device LUN number assignment, the OpenStack Virtual ID, and the NSK primary and alternate volume names (if present) that require approval. Valid user replies are:
- y (approve)
- n (do not approve)
- A LUN number valid for a virtual block device
TIP: If you prefer a different LUN number assignment than what the LUN Manager provides, you can enter a different number as long as it has not been used before and is within the LUN number range.
This command example shows:
- A new device on a CLIM that was previously assigned a LUN Manager number
- A virtual ID that functions like a serial number and is the first 20 characters of the ID that OpenStack assigned to the device
- A proposed Static ID (SID) for the device, which is unique and goes on the label of the disk on the master boot record
Table Continued

Changed LUN Manager commands (L17.02 and later RVUs), continued
The user must decide whether to accept the proposed LUN number. Since the device was previously assigned to 10013, the user opts to assign the device to that number, which creates a static address for the device. The user also assigns devices to 10011 and 10012, and enters Y for each to assign the devices; the devices are then assigned static IDs.

lunmgr -d (--delete) <LUN>
Deletes the input LUN from the device table. The LUN number is not an optional parameter in a virtual environment. This command example shows the deletion of LUNs 10011, 10012, and 10013.
Table Continued

Changed LUN Manager commands (L17.02 and later RVUs), continued

lunmgr -f (--find)
Provides information about the virtual Block I/O devices. This information includes:
- Devices known to this vclim
- Devices seen (present) by this vclim
- Devices that are in the LUN and SID caches but which are not seen by the vclim. If devices are displayed here, run the cleancaches command (lunmgr --cleancaches).
This command example shows:
- Four LUNs (10007, 10010, 10012, and one other) with stable addresses
- Under "Devices that are no longer present but in cache", several LUNs are listed. These are LUNs that were once assigned to the CLIM but are no longer available to the CLIM and which can be freed up by issuing the cleancaches command.
The second command example shows the LUNs for iscsi tape devices (Type 3).

lunmgr -h (--help)
Displays a list of valid commands and the effects of these commands. The help command has been changed to reflect the changes and additions to commands valid for the virtualized environment. The help also describes commands for the physical environment.
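As a combined, hypothetical illustration of the tape-related commands described earlier: discover tape devices behind an iSCSI target, verify them, and later remove one. The IP address and the full iscsi name are placeholders only; use the values reported in your environment.

# Discover and log in to iSCSI virtual tape devices at a target address (address is a placeholder)
lunmgr -t 10.1.1.50

# List the tape devices (Type 3) now known to the vclim
lunmgr -f

# Remove a tape device by its full iSCSI name (name shown is a placeholder)
lunmgr --deliscsitape iqn.example.com.etinet:VBACK00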

Managing Virtualized NonStop (vns)
The topics in this chapter describe the vns management tools, including the tasks and requirements associated with these tools.

NOTE: For VMware environments, see Installing Virtualized NonStop software and tools on VMware.

Deployment management tools:
- Virtualized NonStop Deployment Tools on page 29
- Fault zone isolation options for vns on page 31
- Flavor management for Virtualized NonStop virtual machines on page 32

Virtualized NonStop Deployment Tools
NOTE: For VMware environments, see Installing Virtualized NonStop software and tools on VMware.

In an OpenStack environment, the Virtualized NonStop Deployment Tools use an OpenStack control plane, which provides the foundation for all deployment operations, including a Virtualized NonStop RESTful API that communicates with:
- A Horizon plugin that is accessed via a web browser
- The OpenStack command line interface (CLI)

Figure 4: Virtualized NonStop OpenStack control plane

Features of Virtualized NonStop Deployment Tools
- Horizon plugin and OpenStack CLI interface for system management:
  - The Horizon interface for vns lets administrators perform several actions, including expanding, removing, and adding system resources.
  - The vns OpenStack CLI commands provide access from the command line to the vns APIs and other functionality.
- Supports deployment in a VMware vsphere 6.5 and later environment using vrealize Orchestrator workflows. For more information, see Installing Virtualized NonStop software and tools on VMware.

Relation of Virtualized NonStop Deployment Tools to OpenStack Services
NOTE: This topic does not apply to NonStop Deployment Tools in a VMware environment. For VMware environments, see Installing Virtualized NonStop software and tools on VMware.

Whenever possible, the NonStop Deployment Tools simplify operations and let OpenStack services do the work for you:
- No authentication is done by the NonStop Deployment Tools CLI; authentication is handled by Keystone using an OpenStack Python client configuration package. Tokens that are returned are used with future requests. Token management is handled by several Keystone Python packages.
- RESTful APIs provide communication with other services: Keystone keeps a service catalog, the service catalog is used to look up the RESTful endpoint, and the request is sent to that endpoint.

Figure 5 shows the NonStop Deployment Tools in relation to OpenStack Services.
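As an illustrative sketch only: once the deployment tools are installed, the vns OpenStack CLI commands listed later in this guide (system list, system show, and so on) are issued from a shell where an OpenStack resource file has been sourced. The exact command syntax below is an assumption based on the vnonstop executable referenced in the installation chapter; consult the CLI reference chapter for the supported forms.

# Source the OpenStack credentials, then query vns systems (syntax assumed)
. admin-openrc
vnonstop system list
vnonstop system show <system-name>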

Figure 5: Virtualized NonStop Deployment Tools in OpenStack

Fault zone isolation options for vns
Fault zone isolation options let you select what constraints are placed on the placement of virtual resources when creating the system with the Virtualized NonStop Deployment Tools.

NOTE: For VMware, see Fault isolation levels for vns on VMware.

Options

NonStop Standard
This is the only supported option for production and disaster recovery configurations.
Requires that:
- An administrator configures 2 separate OpenStack availability zones (an OpenStack availability zone is a group of compute nodes)
- You select different availability zones (First Availability Zone and Second Availability Zone) during deployment
Guarantees:
- No two vns CPUs run in the same compute node (odd CPUs are in one availability zone; even CPUs are in another)
- The primary and backup vclims in a failover pair do not run in the same host
- IP and Telco vclims are evenly split into two availability zones
- Storage vclims are split into the two availability zones, based on the disks that are attached
- One half of the mirrored volume is provisioned from the first availability zone and the other half is provisioned from the second availability zone

CPUs Only
- Guarantees that CPU virtual machines for this system will not run in the same host
- No limitations on vclims or disks; however, the CPUs Only option is not fault-tolerant with respect to vclims and disks
- (Optional) Can have one OpenStack availability zone: First Availability Zone

None
- Allows OpenStack to provision resources wherever they fit
- (Optional) Can have one OpenStack availability zone: First Availability Zone
- The None option is not fault-tolerant

Flavor management for Virtualized NonStop virtual machines
IMPORTANT: Until the administrator grants permission to a user, only the administrator (by default) can manage flavors for NonStop virtual machines (VMs).
CPU flavor selections determine the number of cores and memory reserved for each vns VM in the vns system. Because vns CPUs and vclims (Storage, IP, and Telco) require different flavors for VMs, the Horizon interface lets a user select:

- Core and memory size for vns CPUs
- Cores for Storage, IP, and Telco vclims
- The SR-IOV or PCI passthrough alias for the RoCE NIC, which must be specified for each type

For procedure details, see Installing Virtualized NonStop software and tools on OpenStack on page 43.
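A minimal sketch of what a flavor for a vns VM might look like, using standard OpenStack flavor properties for CPU pinning, 1 GB huge pages, and a PCI alias for the RoCE virtual function. The flavor name, sizes, and the roce_vf alias here are hypothetical; the actual flavors and extra specs are created through the Create Flavor workflow or the procedures referenced above.

# Example only: a 4-core, 64 GB vns CPU flavor with pinned cores and 1 GB huge pages
openstack flavor create vns-cpu-4c-64g --vcpus 4 --ram 65536 --disk 0
openstack flavor set vns-cpu-4c-64g \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=1GB \
  --property "pci_passthrough:alias"="roce_vf:1"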

Planning tasks for a Virtualized NonStop system in an OpenStack private cloud

Mandatory prerequisites for a vns system in an OpenStack private cloud
NOTE: These prerequisites do not apply to a VMware environment.

Verify that you have the items in this checklist. Several procedures and dialogs will require these. Record the information that you gather (such as the Expand Node Number) to use during subsequent procedures.

1. vns system components planned and unique identifiers for some components
- The vns System Name (can be alphanumeric and up to 6 letters, without a leading \)
- The Expand Node Number, which must be:
  - Unique in any Expand network that the node will participate on, including a RoCE cluster
  - Unique between all vns systems in the cloud
- The number of vclims being deployed by vclim type, including IP addresses for these vclims, and network assignments for each interface on the vclims
- NSK volume names and sizes to be connected to Storage vclims, and primary and mirror choices for each volume
- The VSA back-end has a unique name and is registered with Cinder and configured by the OpenStack Administrator in cinder.conf:
  [vsa-1]
  hplefthand_password: hpnonstop
  hplefthand_clustername: cluster-vsa1
  hplefthand_api_url:
  hplefthand_username: vsaroot
  hplefthand_iscsi_chap_enabled: true
  volume_backend_name: vsabackend1
  volume_driver: cinder.volume.drivers.san.hp.hp_lefthand_iscsi.hplefthandiscsidriver
  hplefthand_debug: false

2. Software and licenses ready
- DVDs for the SUT software, Independent Products, vclim, HSS, and NSC
- CLIM and HSS ISO image versions (L17.02 or later)
Table Continued

35 Verify that you have License file that supports the vns CPUs or vclims for the intended configuration. In addition, you require the license keys for those optional products which require license keys. 3. Planned the network and set up manageability support The name of the provider networks registered with the OpenStack OVS agent or the OpenStack Administrator. Typically these networks are configured on each compute node in the ml2_conf.ini file. For example: [ml2_type_vlan] network_vlan_ranges = opsnet1,extneta,extnetb When using ConnectX-4 cards, the X and Y fabric networks need to be registered as well. For example: [ml2_type_vlan] network_vlan_ranges = opsnet1,extneta,extnetb,xfabric,yfabric Created the Virtualized NonStop external customer networks by following the procedures in the OpenStack administration guides while adhering to the considerations mentioned in this guide. IP addresses for $ZTCP0 and $ZTCP1 NonStop Maintenance LAN TCP/IP stacks used for SSH and SSL. HPE recommends using the default LAN IP addresses used on NonStop X systems as well as using the other default IP address already used on those systems. Created the IPv4 Virtualized Maintenance LAN for your new vns system that is available to your project in OpenStack. For example:../admin-openrc neutron net-create vnonstop_maintenance_lan neutron net-show vnonstop_maintenance_lan neutron subnet-create <netid> /16 --enable-dhcp --no-gateway \ name vnonstop_maintenance_subnet NOTE: In the previous example, the net-show command provides the maximum transmission unit (MTU) result. Record the MTU result for later use. For more information, see Reviewing maximum transmission unit (MTU) in OpenStack on page 97. Created the Virtualized NonStop Operations LAN which must be an external network that a physical Windows console can access for vnsc, OSM, and TACL. For example:../admin-openrc neutron net-create "vnonstop_operations_lan \ --shared \ --provider:network-type vlan \ --provider:physical_network physnet1 \ --provider:segmentation_id 102 neutron subnet-create <netid> /16 -enable-dhcp --no-gateway \ name vnonstop_operations_subnet Table Continued Planning tasks for a Virtualized NonStop system in an OpenStack private cloud 35

36 Verify that you have When using ConnectX-4 cards, create the X and Y fabric networks. For example: neutron net-create "vns_x_fabric" \ --shared \ --provider:network-type flat \ --provider:physical_network xfabric neutron subnet-create <netid> /24 \ --disable-dhcp \ --no-gateway \ --name "vns_x_fabric_subnet" neutron net-create "vns_y_fabric" \ --shared \ --provider:network-type flat \ --provider:physical_network yfabric neutron subnet-create <netid> /24 \ --disable-dhcp \ --no-gateway \ --name "vns_y_fabric_subnet" 4. Proper NIC configurations for RoCE and vclims (IP and Telco) The name of the 560-series or 530-series physical device(s) registered with OpenStack (optional, but recommended) TIP: These items will have been configured by the OpenStack Administrator on each compute node in the ml2_conf_sriov_agent.ini file. For example: [sriov_nic] physical_device_mappings = extneta:hed1,extnetb:hed2 exclude_devices = The NIC configuration on each compute node has been verified by the OpenStack administrator. When using ConnectX-3 cards, the RoCE NIC with RoCEv2 is enabled. For example: [root@roce100g01 slot_02]# cat /sys/module/mlx4_core/parameters/roce_mode 2 Where the value of 2 in the above cat file denotes RoCEv2. When using ConnectX-3 cards, the RoCE NIC with SR-IOV is enabled (virtual functions are listed). At most 4 virtual functions are allowed. For example: ~# lspci grep Mell 87:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro] 87:00.1 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function] 87:00.2 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function] 87:00.3 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function] 87:00.4 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function] When using ConnectX-4 cards, the ConnectX-4 firmware configured for Ethernet use, SR-IOV, and with the correct number of MSIX vectors. For example: mlxconfig -d <ConnectX-4 PCI ID> set NUM_VF_MSIX=30 LINK_TYPE_P1=ETH LINK_TYPE_P2=ETH SRIOV_EN=True NUM_OF_VFS=4 Table Continued 36 Planning tasks for a Virtualized NonStop system in an OpenStack private cloud

37 Verify that you have When using ConnectX-4 cards, the Host OS needs to be configured to allocate virtual functions for the ConnectX-4 cards. The maximum number of virtual functions per port supported for Virtualized NonStop is 4. These steps will need to be added to a startup file, as the configuration does not persist across host reboots. For example: echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs echo 4 > /sys/class/net/ens1f1/device/sriov_numvfs When using ConnectX-4 cards, the neutron configuration on each compute node needs to be updated to register the ConnectX-4 cards as available for SR-IOV. This configuration will be done by the OpenStack Administrator in the ml2_conf_sriov_agent.ini file. For example: [sriov_nic] physical_device_mappings = extneta:hed1,extnetb:hed2,xfabric:ens1f0,yfabric:ens1f1 When using ConnectX-4 cards, the nova configuration on each compute node needs to be updated to whitelist the ConnectX-4 cards for SR-IOV. This configuration will be done by the OpenStack Administrator in the nova.conf file. For example: [DEFAULT] pci_passthrough_whitelist=[{"devname": "ens1f0", "physical_network": "xfabric"}, {"devname": "ens1f1", "physical_network": "yfabric"}] The NIC with virtual functions enabled (partial example shown): 09:00.0 Ethernet controller: Intel Corporation Gigabit Dual Port Backplane Connection (rev 01) A NIC with connectivity at 10GbE speed (or the desired speed). For example: root@comp004:~# ethtool hed1 grep Speed Speed: 10000Mb/s root@comp004:~# ethtool hed2 grep Speed Speed: 10000Mb/s The RoCE NIC with link alive at 40GbE, on both ports. For example: root@comp004:~# ethtool hed5 grep Speed Speed: 40000Mb/s root@comp004:~# ethtool hed6 grep Speed Speed: 40000Mb/s 5. Planned fault isolation Reviewed Fault zone isolation options for vns on page 31 and selected an option. 1 GB huge pages enabled for vclims with: 16GB for IP vclim or Telco vclim 4GB for Storage vclim these pages evenly spread between available NUMA zones Table Continued Planning tasks for a Virtualized NonStop system in an OpenStack private cloud 37

38 Verify that you have Reviewed the huge pages on each compute node (this example assumes you have a static workload during deployment): root@comp004:~# cat /sys/devices/system/node/node*/meminfo grep Huge Node 0 HugePages_Total: 123 Node 0 HugePages_Free: 67 Node 1 HugePages_Total: 125 Node 1 HugePages_Free: 81 The vclims have compute nodes with isolated hyperthreads. For example: root@comp004:~$ cat /proc/cmdline BOOT_IMAGE=/vmlinuz amd64-hpelinux root=/dev/mapper/hlm--vg-root ro crashkernel=384m-2g: 64M,2G-:256M hugepagesz=1g hugepages=224 default_hugepagesz=1g isolcpus=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,25,26,27,28,29,30,31,32,33,3 4,35,36,37,38,39,40,41,42,43,44,45,46,47 intel_iommu=on iommu=pt quiet Isolated the cores and hyperthreads and created a chart of available cores and hyperthreads that is guided by the CPU architecture. This checklist item determines which cores and hyperthreads are already occupied on the compute nodes. Use the unoccupied cores and hyperthreads on the compute nodes to determine if the resources in the cloud are sufficient to provision the vns system. The next two tasks provide the available core and hyperthread capacity. For example: root@comp004:~$ sudo lscpu grep NUMA NUMA node(s): 2 NUMA node0 CPU(s): 0-11,24-35 NUMA node1 CPU(s): 12-23,36-47 You have obtained the list of VMs. This assumes you have a static workload. For example: root@comp004:~$ sudo virsh list --all Id Name State instance f running 3 instance running 4 instance running 6 instance e running Table Continued 38 Planning tasks for a Virtualized NonStop system in an OpenStack private cloud

39 Verify that you have For each VM, you have marked the used cells in the CPU chart and associated hyperthreads (if not explicitly listed). This shows a 4 core, 8 hyperthreaded vclim and assumes a static workload: root@comp0004:~$ sudo virsh dumpxml instance grep cpupin <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='25'/> <vcpupin vcpu='2' cpuset='9'/> <vcpupin vcpu='3' cpuset='33'/> <vcpupin vcpu='4' cpuset='16'/> <vcpupin vcpu='5' cpuset='40'/> <vcpupin vcpu='6' cpuset='13'/> <vcpupin vcpu='7' cpuset='37'/> Repeat the previous step for each VM and complete this checklist, then proceed to the next chapter. Planning tasks for a Virtualized NonStop system in an OpenStack private cloud 39

40 Configuring a Virtualized NonStop system for deployment These topics describe the prerequisites or steps required to deploy vns and should be followed in this order. Procedure 1. Configuring the Host OS on the compute nodes (ConnectX-4) 2. Configuring OpenStack (ConnectX-4) Configuring the Host OS on the compute nodes (ConnectX-4) Prerequisites NOTE: For the compute nodes in a VMware environment, ensure that you have completed the Predeployment hardware tasks for VMware environment. You must configure the compute nodes to support SRIOV passthrough, Global Pause on the RoCE NICs, huge page memory configuration, and CPU isolation to support the vns system. You must perform these steps on each compute node. You may want to check your OFED driver and HCA firmware information for the RVU. See Supported OFED drivers and HCA firmware by RVU. Procedure 1. Install the OFED drivers version for your Host OS using this download link: 2. Follow the firmware package instructions for downloading and installing the latest firmware (minimum or later) on each ConnectX-4 NIC. 3. Set the virtual MSIX vectors to 30 using the mlxconfig tool (there is no BIOS setting). mlxconfig -d /dev/mst/mt4115_pciconf0 set NUM_VF_MSIX=30 4. Reboot the compute node. 5. Set the virtual functions (VFs) on the ConnectX-4 card. VFs need to be set to the same number allocated in the OS. NOTE: Typically, VFs are set to 4 which is the maximum supported. 6. Using sysfs on Linux, configure the VFs by echoing into the /sys/class/net/<interface>/ device/sriov_numvfs file. 40 Configuring a Virtualized NonStop system for deployment

41 TIP: In the sysfs tree, if you cat sriov_num, it lists the number of currently allocated VFs in the OS. If you write to sriov_num by echoing and redirecting into the file, that changes the number of VFs allocated in the OS. In the following example, 4 is echoed into both files and each of the ports on the cards have 4 VFs. As a best practice, ensure that a startup script has the VF configuration information. echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs echo 4 > /sys/class/net/ens1f1/device/sriov_numvfs 7. Add these flags to the default command line arguments in the grub configuration file to allocate memory for the VMs that will be deployed in the compute node. In the following argument, num is the number of 1GB pages to pre-allocate. Make sure you leave sufficient unallocated memory on the host to support the overhead of the Host OS and the KVM. hugepagesz=1g hugepages=<num> transparent_hugepage=never 8. Any CPU cores that will be assigned to VMs must be isolated so that the Host OS does not use them. Add a flag such as the following to the default command line arguments in the grub configuration file. Note that the value depends on the number of cores and hyperthreads available from the CPU architecture. isolcpus=<1-9,11-19,21-29,31-39> 9. Run update-grub to update the grub loader and reboot the node. Configuring OpenStack (ConnectX-4) Prerequisites You must perform some OpenStack configuration to support deployment of a Virtualized NonStop system into the OpenStack cloud. This procedure does not apply to VMware. As of L18.02 and later RVUs, new vns systems use ConnectX-4 NICs. NOTE: ConnectX-4 and ConnectX-3 Host Channel Adapters (HCAs) cannot coexist in the same compute node. Procedure 1. On each compute node, edit the pci_passthrough_whitelist parameter in the nova.conf file located in /etc/nova/nova.conf with the interface name and physical network name. The following example shows setting the pci_passthrough_whitelist to associate an interface (enslf0) with the X fabric and an interface (enslf1) with the Y fabric. Where enslf0 is slot 1, function 0 which is port 1 on the slot 1 NIC Configuring OpenStack (ConnectX-4) 41
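Pulling steps 7 through 9 together, the kernel arguments are typically added to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and then applied with update-grub. The huge page count and isolcpus list below are placeholders; size them for your compute node as described above, leaving enough unallocated memory and cores for the host OS and KVM.

# /etc/default/grub (values are placeholders; adjust hugepages count and isolcpus list per node)
GRUB_CMDLINE_LINUX_DEFAULT="hugepagesz=1G hugepages=224 default_hugepagesz=1G transparent_hugepage=never isolcpus=1-23,25-47 intel_iommu=on iommu=pt"

# Apply the change and reboot the compute node
sudo update-grub
sudo reboot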

42 Where enslf1 is slot 1, function 1 which is port 2 on the slot 1 NIC pci_passthrough_whitelist = [{"devname": "ens1f0", "physical_network": "xfabric"}, {"devname": "ens1f1", "physical_network": "yfabric"}] 2. In Neutron, run the SRIOV agent on each compute node to configure the physical device mapping between physical network names and interfaces and to allocate the devices. The following example shows the allocation of the same devices used in the previous step. [sriov_nic] physical_device_mappings = xfabric:ens1f0,yfabric:ens1f1 3. On each node in the control plane, add a PciPassthroughFilter parameter to the scheduler_default_filters list in the nova.conf file as shown in the following example. scheduler_default_filters = AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, NUMATopologyFilter, PciPassthroughFilter, RetryFilter 4. Perform the following substeps in Neutron. a. Configure the ML2 plugin to enable SRIOV. b. Add sriovnicswitch to the list of mechanism drivers. mechanism_drivers = openvswitch,l2population,sriovnicswitch c. To allow them to be configured as networks in Neutron, add the physical network names to flat_networks. flat_networks = provider,xfabric,yfabric d. Create the actual X fabric and Y fabric networks in Neutron. Each fabric requires a separate network. Ensure the provider-physical-network value matches the value configured in the Neutron configuration files. openstack network create --share --provider-network-type flat --provider-physical-network xfabric xfabric-net openstack network create --share --provider-network-type flat --provider-physical-network yfabric yfabric-net e. Create the subnets for each fabric network in Neutron. Each fabric network requires a separate subnet. f. Specify an IP address range for the network. Even though the IP address range is not enforced for the SRIOV or PCI passthrough ports allocated on the network, the IP address is required. g. For each subnet, select a different network range that is private and not used. openstack subnet create --network xfabric-net --subnet-range /24 xfabric-subnet openstack subnet create --network yfabric-net --subnet-range /24 yfabric-subnet 42 Configuring a Virtualized NonStop system for deployment
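After editing nova.conf and the Neutron ML2/SR-IOV configuration files, the affected services must be restarted for the changes to take effect. The service names below are the typical ones on an Ubuntu-based OpenStack deployment and may differ in your distribution; treat this as an illustrative sketch only.

# On each compute node (service names assumed for Ubuntu packaging)
sudo service nova-compute restart
sudo service neutron-sriov-agent restart

# On the control plane nodes
sudo service nova-scheduler restart
sudo service neutron-server restart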

Installing Virtualized NonStop software and tools on OpenStack

NOTE: For deploying in a VMware environment, see Installing Virtualized NonStop software and tools on VMware.

To deploy vns in an OpenStack environment, complete the following steps and prerequisites.

Procedure

1. Obtaining Virtualized NonStop software for OpenStack
2. Sourcing in an OpenStack resource file
3. Depending on your deployment OpenStack environment, select one of the following.
   Installing Virtualized NonStop Deployment Tools on Ubuntu on page 44
   Installing Virtualized NonStop Deployment Tools on Red Hat OpenStack Platform (RHOSP)
4. Importing Virtualized NonStop images into Glance (OpenStack)
5. Creating the Virtualized NonStop system flavors
6. Create the vnsc
7. Mandatory prerequisites for a vns system in an OpenStack private cloud on page 34

Obtaining Virtualized NonStop software for OpenStack

The vns software for OpenStack is obtained from HPE electronically or via DVD.

Sourcing in an OpenStack resource file

Prerequisites

If OpenStack is installed, you can retrieve a resource file for your OpenStack user using the Horizon dashboard.

Procedure

1. Log into your domain as your OpenStack user.
2. Select the Virtualized NonStop project name, and navigate to Compute->Access & Security in the left-hand pane.
3. In the tabs shown, select API Access and click the button to download the appropriate version of the OpenStack RC file for the OpenStack user account.
4. Move this file to your Linux user account home directory on the Linux controller node to make it easy to locate.
5. Depending on how you name your resource file, issue a "source" or "." command such as:

$ source <my-openstack-user-name>-openrc

Here is an example using ".":

$ . admin-openrc

Installing Virtualized NonStop Deployment Tools on Ubuntu

NOTE: If you are installing the Virtualized NonStop Deployment Tools on Red Hat OpenStack Platform (RHOSP), see Installing Virtualized NonStop Deployment Tools on Red Hat OpenStack Platform (RHOSP). If you are installing the Virtualized NonStop Deployment Tools on VMware, see Installing Virtualized NonStop software and tools on VMware.

This procedure assumes that you are root on the controller or have sudo permissions. OpenStack commands shown here must be run as an administrator and assume the appropriate OpenStack resource file has been sourced for an administrative user.

Procedure

1. Download the HPE Virtualized NonStop Deployment Tools from Scout. This download includes a file named vnonstop-openstack-<x.y.z>.tar.gz, where <x.y.z> is a version string.
2. Copy the package to the control node.
3. Untar/gunzip the package: tar -pzvxf vnonstop-openstack-x.y.z.tar.gz
4. Change directories into the untarred package: cd vnonstop-openstack-x.y.z/
5. Run the install script.
   If you are running as the root user: ./install.sh
   If you are not running as the root user: sudo ./install.sh
6. Verify the installed version: $ vnonstop --version
7. Create the MySQL database for the vns service, and grant privileges to the vns user of MySQL on local and remote hosts. MySQL should have been installed and started at an early stage of OpenStack deployment, during installation of the Keystone identity service.

TIP: MariaDB is compatible with MySQL and can be used for the Virtualized NonStop database. When using MySQL, the banner and prompt will differ from what is shown in the example below. Ensure that you replace PASSWORD in the example with a suitable password. This password will be required later to configure the vns service in OpenStack.

Example 1 Creating a MariaDB database

$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor
MariaDB [(none)]> create database vnonstop;
...
MariaDB [(none)]> grant all privileges on vnonstop.* to 'vnonstop'@'localhost' identified by '<PASSWORD>';
...
MariaDB [(none)]> grant all privileges on vnonstop.* to 'vnonstop'@'%' identified by '<PASSWORD>';
...
MariaDB [(none)]> exit
Bye

8. Make sure you source in the resource file for the OpenStack user with admin rights before performing the series of commands in the example. This example displays the sequence of commands for creating the vns service, user, role, and endpoints in OpenStack. Note that you need to provide information for <service project>, <RegionName>, and <host or IP> as described in the Role, region, and host/IP table.

Example 2 Creating vns service, user, role, and endpoints in OpenStack

$ openstack service create vlicense --name "vnonstop" --description "Virtualized NonStop API service"
$ openstack user create vnonstop --password-prompt
$ openstack role add --user vnonstop --project <service project> admin
$ openstack endpoint create --region <RegionName> vlicense admin http://<host or IP>:9990/v1
$ openstack endpoint create --region <RegionName> vlicense internal http://<host or IP>:9990/v1
$ openstack endpoint create --region <RegionName> vlicense public http://<host or IP>:9990/v1

Role, region, and host/IP

<service project>: Replace with your OpenStack project in which other OpenStack services such as Nova were created.
<host or IP>: Hostname or IP address of the controller.
<RegionName>: OpenStack region name (only necessary if a multi-region OpenStack setup is being used).

TIP: If you have problems finding the correct <service project> name or <RegionName>, review the output of these commands:

$ openstack project list --long
$ openstack region list

9. Create a configuration file called /etc/vnonstop/vnonstop.conf. Example 3 vns configuration file shows a full configuration file.

Example 3 vns configuration file

[DEFAULT]
log_file=/var/log/vnonstop/vnonstop-api.log

[database]
host = localhost

user = vnonstop
password = password
name = vnonstop

[keystone_authtoken]
auth_uri =
auth_url =
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = vnonstop
password = password

10. Open the configuration file with an editor to set or add the entries described in this table.

[DEFAULT]
Set the log file. Recommended: /var/log/vnonstop/vnonstop-api.log
If a different IP port than 9990 is desired, add a line port=<port #>

[database]
Set the host for the database connection. Current default is localhost.
Set the database name for the database connection. Current default is vnonstop.
Set the user for the database connection. Current default is vnonstop.
Set the password for the database connection. No default.

[keystone_authtoken]
Based on the Keystone configuration in your cloud, set the keystone_authtoken entries for the vns user previously created.

11. Restart the vns-api service by issuing: restart vnonstop-api service

12. Complete the procedures for deploying vns in an OpenStack environment.
a. Importing Virtualized NonStop images into Glance (OpenStack) on page 52
b. Creating the Virtualized NonStop system flavors on page 53
c. Create the vnsc on page 55
d. Mandatory prerequisites for a vns system in an OpenStack private cloud
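After restarting the vnonstop-api service, it can be useful to confirm that the service is reachable and that the endpoints were registered. The exact restart mechanism depends on your init system, so the commands below only verify the result; the port (9990) and the service name (vlicense) come from the steps above, while ss and the --service filter are generic Linux and OpenStack CLI conventions rather than anything mandated by the Deployment Tools.

# Confirm something is listening on the vns API port
$ ss -tlnp | grep 9990

# Confirm the vlicense endpoints created earlier are registered in Keystone
$ openstack endpoint list --service vlicense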

Installing Virtualized NonStop Deployment Tools on Red Hat OpenStack Platform (RHOSP)

The steps for installing the Deployment Tools on RHOSP are similar to the installation steps for Ubuntu. RHOSP has additional steps for the High-Availability (HA) control plane. To install the Deployment Tools on RHOSP, perform the following procedures.

1. Creating OpenStack service and user on RHOSP
2. Creating MySQL database on RHOSP
3. Installing PIP for RHOSP
4. Configuring control node configuration on RedHat Open Stack platform

Creating OpenStack service and user on RHOSP

From the undercloud node, source the overcloud credentials file (overcloudrc or <cloudname>rc). Run the following commands to create the service and user for the vns API service.

Procedure

1. openstack service create vlicense --name "vnonstop" --description "Virtualized NonStop API service"
2. Get the data for the endpoints by running: openstack endpoint show nova
3. Record the fields for region, adminurl, internalurl, and publicurl, and record the IP addresses for each value.
4. Add the endpoints for the vns API service by issuing: openstack endpoint create --region <region> --adminurl <admin URL> --internalurl <internal URL> --publicurl <public URL> vlicense
5. Create the user for the vns API service.
a. openstack user create vnonstop --password-prompt
b. openstack role add --user vnonstop --project service admin
6. Proceed to Creating MySQL database on RHOSP.

Creating MySQL database on RHOSP

To create the MySQL database for use by the vns API service, run the following commands on only one of the control nodes (not on the director node). The database is synchronized across all the nodes automatically, so it is not necessary to run this on each control node.

Procedure

1. sudo mysql
2. create database vnonstop;

3. grant all privileges on vnonstop.* to 'vnonstop'@'localhost' identified by 'hpnonstop';
4. grant all privileges on vnonstop.* to 'vnonstop'@'%' identified by 'hpnonstop';
5. exit
6. Proceed to Configuring control node configuration on RedHat Open Stack platform.

Installing PIP for RHOSP

Procedure

1. Pip is not available in the default yum repositories for RHOSP. You must install PIP by following the instructions at:
2. Proceed to Configuring control node configuration on RedHat Open Stack platform.

Configuring control node configuration on RedHat Open Stack platform

All of the commands in this checklist require root access. Once you SSH into the controller, run sudo -s to get a root prompt.
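Before starting the per-node configuration below, it can help to sanity-check the database grants and the PIP installation created in the preceding sections. These commands are only a sketch: the vnonstop user and password come from the grants above, the check can be run from any control node, and pip may have been installed by whichever method your site chose.

# From a control node, confirm the vnonstop user can reach its database
$ mysql -u vnonstop -p -e "show databases;" | grep vnonstop

# Confirm pip is on the PATH
$ pip --version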

You must run these steps on each of the control nodes directly.

Configure the HAProxy service

1. Edit the /etc/haproxy/haproxy.cfg file.
2. Copy the listen cinder block to the end of the file.
3. Change listen cinder to listen vnsapi.
4. Change the existing port number, 8776, to 9990 in all bind and server lines for the new listen vnsapi block.
5. Check the server lines and record the IP address associated with the controller you are logged onto. For example, if you are logged on to control-0, these are the server lines. Record the IP address for later. You will need this address for configuring the vns API service:

server vrhdev-control-0.internalapi.localdomain <IP address>:9990 check fall 5 inter 2000 rise 2
server vrhdev-control-1.internalapi.localdomain <IP address>:9990 check fall 5 inter 2000 rise 2
server vrhdev-control-2.internalapi.localdomain <IP address>:9990 check fall 5 inter 2000 rise 2

6. Restart the HAProxy service by issuing: systemctl restart haproxy.service

NOTE: An error message may display stating proxy vnsapi has no server available! Ignore this message as the server will be started later. An illustrative listen vnsapi block is sketched after this note.
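Because the listen vnsapi block is created by copying the existing listen cinder block and changing the port, the end result typically looks similar to the sketch below. The bind line, balance mode, and option lines are whatever your cinder block already contained; only the vnsapi name and port 9990 come from the steps above, and the addresses are placeholders.

listen vnsapi
  bind <virtual IP>:9990
  server vrhdev-control-0.internalapi.localdomain <control-0 IP>:9990 check fall 5 inter 2000 rise 2
  server vrhdev-control-1.internalapi.localdomain <control-1 IP>:9990 check fall 5 inter 2000 rise 2
  server vrhdev-control-2.internalapi.localdomain <control-2 IP>:9990 check fall 5 inter 2000 rise 2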

Configure IP Tables

You must run these steps on each of the control nodes directly.

1. Determine where to insert the new iptables rule by issuing the following command: iptables --list INPUT --line-numbers
2. Scan the output and record the line number of the first REJECT or DROP line, whichever comes first. For example, in the following output, line 76 is the line number to record. This is used in the next step to make sure the new rule is inserted before the REJECT or DROP line.

Chain INPUT (policy ACCEPT)
num  target          prot opt source    destination
1    nova-api-input  all  --  anywhere  anywhere
     ACCEPT          tcp  --  anywhere  anywhere  state NEW tcp dpt:ssh
76   REJECT          all  --  anywhere  anywhere  reject-with icmp-host-prohibited
77   LOG             all  --  anywhere  anywhere  /* 998 log all */ LOG level warning
78   DROP            all  --  anywhere  anywhere  /* 999 drop all */ state NEW

Add the vns API service to iptables right before the first REJECT or DROP line.

iptables -I INPUT <line number> -p tcp -m multiport --dports 9990 -m comment --comment "vns_api_server" -m state --state NEW -j ACCEPT

3. Save the iptables rules to persist across reboot: iptables-save > /etc/sysconfig/iptables

Install the vns Deployment Tools

1. Untar the vnonstop-openstack-<version>.tar.gz file.
2. Change directories into the extracted vnonstop-openstack-<version> folder.
3. Run ./install.sh
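After adding the firewall rule and running the installer, a quick check on each control node confirms that both took effect. These are generic commands rather than part of the Deployment Tools; the vns_api_server comment and the vnonstop CLI name come from the steps above.

# The new rule should appear above the first REJECT/DROP line
$ iptables --list INPUT --line-numbers | grep -i vns_api_server

# The Deployment Tools CLI should report its version
$ vnonstop --version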

Create the configuration file for the vns Deployment Tools

You must run these steps on each of the control nodes directly.

1. Edit the /etc/vnonstop/vnonstop.conf file as follows:

a. Create a [DEFAULT] section and add the following. For the IP address for the host, use the IP address you recorded when configuring the HAProxy service above. The same configuration file can be used on all controller nodes, just updating the host value to the correct IP address.

host=<ip address>
log_file=/var/log/vnonstop/vnonstop-api.log

b. Update the [database] section with the information for connecting to MySQL as follows:

[database]
host = <virtual IP>
user = <MySQL user created above>
password = <MySQL password created above>
name = <MySQL database created above>

NOTE: The IP address of the host should be the virtual IP for the control plane. To verify, check the bind line of the mysql section in the /etc/haproxy/haproxy.cfg file on any of the control nodes.

c. Update the [keystone_authtoken] section with the information for connecting to OpenStack using the vnonstop user you created earlier.

[keystone_authtoken]
username = vnonstop
password = <password>
project_name = service
memcached_servers = <IP control-0>:11211,<IP control-1>:11211,...
auth_type = password
auth_url =
auth_uri =

Considerations

I. The memcached_servers line should contain a comma-separated list of IP:port entries, one for each control node, with port 11211.
II. The admin IP is the IP address the keystone endpoint is listening on for the adminurl. This can be seen from the openstack endpoint show keystone command and by looking at the IP address in the adminurl field.
III. The VIP is the virtual IP address the keystone endpoint is listening on for the internalurl. This can be seen from the openstack endpoint show keystone command and by looking at the IP address in the internalurl field.

Start the vns API service

The configuration is complete. Start the vns API service by using systemctl:

You must run these steps on each of the control nodes directly.

systemctl enable vnonstop-api.service
systemctl start vnonstop-api.service

Importing Virtualized NonStop images into Glance (OpenStack)

NOTE: The core license file will also need to be installed on the vns system after NSK is running, using the Install Core License guided procedure found on the system object in the OSM Service Connection.

vns component                                  Initial delivery          Updates
SUT                                            QCOW2 image               BACKUP format
Core license file                              File                      File
vclim software                                 QCOW2 image               Installer
vnsc software                                  ISO image                 Installer
Halted State Services (HSS) initial boot OS    ISO image                 ISO image
Independent Products                           Various formats           Various formats
Software Product Revisions (SPRs)              -                         Files
vns deployment tools                           Zipped file collection    Zipped file collection

Once the images have been acquired from Scout, use the openstack image create command to upload the images into Glance. The images may be added to the admin project and made public so that the users of the Virtualized NonStop project or projects have access to them, or they may be added directly to the Virtualized NonStop project(s).

Example 4 Uploading an image into Glance

root@comp004:~$ . ../admin-openrc
root@comp004:~$ glance image-create --name $name --file ../imgs/$name \
  --container-format bare --disk-format $format --visibility public

Example 5 Verifying an image (QCOW)

root@comp004:~$ sha256sum T0976L03_15FEB2017_11JAN2017_L03.qcow2
6167dbd47f6e9b59a37f948a113a227cb9bd704440d9ef0b6c12c8d77c01b48b  T0976L03_15FEB2017_11JAN2017_L03.qcow2
root@comp004:~$ cat T0976L03_15FEB2017_11JAN2017_L03.sha256
6167dbd47f6e9b59a37f948a113a227cb9bd704440d9ef0b6c12c8d77c01b48b  T0976L03_15FEB2017_11JAN2017_L03.qcow2

Prerequisites

You must acquire the images for the vns components and save them to either:

53 Node where you installed the Virtualized NonStop Deployment tools, so you can use the commandline tools to import the images The system that will be used to access Horizon and upload the images Creating the Virtualized NonStop system flavors IMPORTANT: Until the administrator grants permission to a user, only the administrator (by default) can manage flavors for NonStop virtual machines (VMs). NOTE: Before a vns system can be deployed, you must create flavors for a minimum of 1 vns CPU, 1 Storage vclim, and 1 IP vclim or 1 Telco vclim. For an overview of creating flavors, see Flavor management for Virtualized NonStop virtual machines on page 32 Procedure 1. Log on to Horizon. 2. Using the Virtualized NonStop deployment tool tab, select Admin->NonStop->Flavors. The flavors panel displays. If this is the first time you are adding a flavor, the panel will be empty. The following example shows a flavor panel with flavors. 3. Click +Create Flavor. The Create Virtualized NonStop Flavor dialog appears. Creating the Virtualized NonStop system flavors 53

54 4. Click Next. The Flavor Information dialog box appears. a. Enter a name in Flavor Name. b. Using the Core Count drop-down menu, select a Core Count that is compatible with your license(s) and system model (entry-class or high-end). c. Enter a random access memory (RAM) size that is compatible with your license(s) and system model (entry-class or high-end) in the RAM Size in Gigabytes. 54 Installing Virtualized NonStop software and tools on OpenStack

55 d. If you want to split the cores and memory between Numa nodes, select the Split the Cores and Memory between Numa nodes checkbox. If the checkbox is not selected, all cores and memory are allocated from the same Numa node as the PCI card for the fabric. e. If the RoCE NICs are ConnectX-3, select Use PCI Alias for this flavor (do not select for ConnectX-4 NICs and do not perform the instructions in this substep if the NICs are ConnectX-4). Enter a string based on the pci_alias field values for the RoCE NIC PCI interface. The Fabric PCI alias uses either VF for SRI-IOV or PF for PCI passthrough. The PCI alias information is contained in the /etc/nova/nova.conf of the OpenStack control node. It can also be obtained by logging onto the control node with SSH and issuing this command: ~$ sudo cat /etc/nova/nova.conf grep pci_alias Example result: pci_alias={"name":"mellanox_vf", "product_id":"1004", "vendor_id":"15b3", "device_type":"type-vf"} In the example above, "Mellanox_VF" is the alias name for the SR-IOV virtual function of a RoCE ConnectX-3 NIC. A new CPU flavor for this alias name could be Mellanox_VF so that the Flavor will help select a Nova virtual machine location requiring one (:1) SR-IOV virtual function on a Mellanox PCI interface. 5. If you have finished your selections, click Create Flavor. 6. Repeat this procedure to create your other vns flavors. Create the vnsc The vns system requires a 2012 or 2016 Windows Server VM running as a vnsc. If you do not have a windows image for use, see Creating a Virtualized NonStop System console (vnsc) on page 109 or Creating a Virtualized NonStop System (vnsc) for VMware. Create the vnsc 55

56 Installing Virtualized NonStop software and tools on VMware Prerequisites To deploy vns in a VMware environment, see the following topics. Procedure 1. Verify that the requirements for vns on VMware are met. 2. Predeployment hardware tasks for VMware environment. 3. Fault isolation levels for vns on VMware. See also Failover support for vns network interface cards (NICs). 4. Install the vns VMware package and set up Orchestrator VM tagging. 5. Plan and create the vns system for a VMware environment. See also: Supported attributes for JSON specification file for VMware. Managing vns tasks on VMware 6. Creating a Virtualized NonStop System (vnsc) for VMware Obtaining Virtualized NonStop software for VMware The vns software for VMware is obtained from HPE electronically. Minimum configuration requirements for the vns VMware environment Two hosts per data center. NOTE: For fault tolerant purposes, the best practice is to have as many hosts as the maximum number of CPUs, Storage CLIMs, IP CLIMs, and Telco CLIMs. For example, a vns system with four CPUs, two IP CLIMs, and six Storage CLIMs would need six hosts. Four networks that are port groups. These port group networks are used for the: Maintenance LAN External connectivity X fabric Y fabric Two datastores that use separate storage hardware for fault tolerant purposes. 56 Installing Virtualized NonStop software and tools on VMware

57 Verify that the requirements for vns on VMware are met Prior to deployment verify that the following software and configuration requirements are met. Verify these software requirements are met: VMware vsphere Hypervisor (ESXi) 6.5 or later is installed on the compute nodes. VMware vcenter Server 6.5 or later is installed as a vcenter Server Appliance (vcsa) or on the Virtualized NonStop System Console (vnsc). VMware vrealize Orchestrator Appliance 7.3 or later is deployed. NonStop System Console Installer DVD image (Update 30 or later) is present. The DVD image installs OSM System Console Tools and other system console software on a vnsc Virtual Machine. The vns Deployment for VMware workflow package: com.hpe.vns.package is ready to import into vrealize Orchestrator. IMPORTANT: Delete any previously imported vns deployment packages (com.hpe.vns.package) from vrealize Orchestrator if present. The following vns images are in the vcenter. HSS image file iso file. CLIM image vmdk file. NSK SUT image vmdk file. Verify these configuration requirements are met: The compute nodes have VMware vsphere Hypervisor (ESXi) 6.5 or later installed and have been added to the VMware vcenter Server under either a: datacenter node or cluster node Storage has been set up in vcenter. Each compute host should be able to access at least one datastore for storage. vns image files have been transferred to storage and are visible in vcenter. The predeployment hardware tasks for VMware environment are completed. Network port groups have been setup on compute hosts in vcenter. Limitations The Virtual Machines (VMs) created by the vns VMware workflows do not support migrate, clone, or template commands in the vsphere Web Client. The limitation is due to SR-IOV. For more information, see the "SR-IOV Support" topic in the vsphere Networking Guide located at: docs.vmware.com/en/vmware-vsphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf. Verify that the requirements for vns on VMware are met 57
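As a quick check against the software versions listed above, the ESXi build on each compute node can be read directly from the host shell. These are standard ESXi commands, not something specific to the vns tools; the vCenter Server and vRealize Orchestrator versions are easiest to confirm from their own web interfaces.

[root@localhost:~] vmware -v
[root@localhost:~] esxcli system version get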

58 Fault isolation levels for vns on VMware The vns VMware deployment workflow checks the specified faultzone value to make sure that the placement of vns CPUs and vclims on hosts is consistent with the specified value in the JSON input file and the rules for the faultzone level. The vns Deployment Tools for VMware support the following faultzone levels. faultzone 2 (Default) Description Fault-tolerant rules are enforced on CPUs, CLIMs and NSK volumes. Each CPU VM is required to run in a separate host. The number of CPU VMs must be an even number. The cpunumber of the CPU VM is required to be a value from 0 to <numberofcpu>-1. For example, if the (<numberofcpu> is 6), the cpunumber of the CPU VM has to be 0, 1, 2, 3, 4, 5 (from 0 to 5). No two CLIM VMs of the same type can run in the same host. VMs of the same type must run in different hosts. NSK volume is required to have both primaryclim and mirrorclim to provide both paths to the volume. 1 Fault-tolerant rules are enforced only on CPUs. Each CPU VM is required to run in a separate host. The number of CPU VMs must be an even number. The cpunumber of the CPU VM is required to be a value from 0 to <numberofcpu>-1 Two CLIM VMs of the same type are not required to run in different hosts. NSK volume is not required to have both primaryclim and mirrorclim to provide both paths to the volume. 0 No fault isolation. No fault-tolerant rules are enforced. Each CPU VM is not required to run in a separate host. The number of CPU VMs does not have to be an even number. The cpunumber of the CPU VM is not required to be a value from 0 to <numberofcpu>-1. Two CLIM VMs of the same type are not required to run in different hosts. NSK volume is not required to have both primaryclim and mirrorclim to provide both paths to the volume. 58 Fault isolation levels for vns on VMware

Considerations for the faultzone level when modifying a deployment in VMware

Generally, when modifications are made to an existing deployment and include modifications to the JSON input file, the faultzone level should be set to 0. Setting the faultzone to 0 ensures that changes are only made to VMs that are powered off before the workflow is started; powered-on VMs are skipped. However, entries are still logged in the Orchestrator workflow log. Review the applicable entries in the Orchestrator workflow log to ensure that changes do not violate the placement rules for a previous faultzone level.

Import Virtualized NonStop images into the VMware environment

NOTE: The core license file will also need to be installed on the vns system after NSK is running. To install the core license, use the Install Core License guided procedure found on the system object in the OSM Service Connection.

vns component                                  Initial delivery          Updates
SUT                                            VMDK image                BACKUP format
Core license file                              File                      File
vclim software                                 VMDK image                Installer
vnsc software                                  ISO image                 Installer
Halted State Services (HSS) initial boot OS    ISO image                 ISO image
Independent Products                           Various formats           Various formats
Software Product Revisions (SPRs)              -                         Files
vns deployment tools                           Zipped file collection    Zipped file collection

The VMware deployment tools are delivered as a zipped file collection. The zip file contains the vns deployment package com.hpe.vns.package with workflows and a sample JSON input file. Once the images have been acquired from HPE, upload the images to a datastore in the datacenter using an sftp command or through the vSphere Web Client.

Predeployment hardware tasks for VMware environment

Prerequisites

The following procedure uses a Gen9 server as the Compute Node. Adjust the procedure steps for your hardware environment.

Procedure

1. To get the Compute Nodes up and running for the VMware environment, use the iLO Integrated Remote Console to configure the following options in the BIOS settings. For information on using the

iLO Integrated Remote Console, see UEFI System Utilities User Guide for HPE ProLiant Gen9 Servers and HPE Synergy.

a. For System Options -> Virtualization Options, ensure that Virtualization Technology, Intel VT-D, and SR-IOV are enabled.
b. For System Options -> Processor Options, ensure that Intel (R) Hyperthreading is Enabled, Processor Core Disable is set to 0 (will Enable All Cores), and Processor x2apic Support is Enabled.
c. For Performance Options, set Intel (R) Turbo Boost Technology to Enabled.
d. For Power Management Options, ensure that Power Profile is set to Maximum Performance.
e. For Power Management in Advanced Power Options, ensure that Intel QPI Link Enablement is [Auto], Dynamic Power Savings Mode Response is [Fast], Collaborative Power Control is [Enabled], and Redundant Power Supply Mode is [Balanced Mode].

2. Verify that the ConnectX-4 driver version ( or greater) is installed on each Compute Node.

[root@localhost:~] esxcli software vib list | grep nmlx5-core
nmlx5-core    OEM    MEL    VMwareCertified

If the correct drivers are not installed, download them from compatibility/detail.php?devicecategory=io&productid=

NOTE: A reboot of the Compute Node is required after the ConnectX-4 driver is installed.

3. Once the ConnectX-4 drivers are set up, verify the Mellanox Software Tools (MST) version.

a. Verify that the MST version ( or greater) vSphere Installation Bundle (VIB) package is installed on the Compute Node.

[root@localhost:~] esxcli software vib list | grep mst
nmst    OEM    MEL    PartnerSupported

If the correct version is not installed, download it from management_tools.

b. To install the MST VIB package, scp the VIB package to your Compute Node and issue the following command:

[root@localhost:~] esxcli software vib install -v <fully-qualified-path-of-mst-vibfile>

NOTE: If you will be installing Mellanox Firmware Tools (MFT), you can skip step 3c and reboot the Compute Node after completing the MFT installation.

c. Reboot the Compute Node.

4. Verify the Mellanox Firmware Tools (MFT) version.

a. Verify that the MFT version ( or greater) vSphere Installation Bundle (VIB) package is installed on the Compute Node.

[root@localhost:~] esxcli software vib list | grep mft
mft    Mellanox    PartnerSupported

If the correct version is not installed, download it from management_tools.

b. To install the MFT VIB package, scp the VIB package to your Compute Node and issue the following command.

[root@localhost:~] esxcli software vib install -v <fully-qualified-path-of-mft-vibfile>

c. Reboot the Compute Node.

5. Configure the ConnectX-4 driver settings for MSIX and Link Type.

a. If MST is not started, start it by issuing: /opt/mellanox/bin/mst start
b. Identify all ConnectX-4 cards on the Compute Node by issuing the ./mst status command.

cd /opt/mellanox/bin
./mst status
MST devices:
mt4115_pciconf0   <- MST device name of the first ConnectX-4 card installed
mt4115_pciconf1   <- MST device name of the second ConnectX-4 card installed

c. For each MST device name identified in the previous step, use the mlxconfig utility to set the NUM_VF_MSIX, LINK_TYPE_P1, LINK_TYPE_P2, SRIOV_EN, and NUM_OF_VFS parameters to the following values:

./mlxconfig -d <mst-device-name-of-connectx-4-card> set NUM_VF_MSIX=30 LINK_TYPE_P1=ETH LINK_TYPE_P2=ETH SRIOV_EN=True NUM_OF_VFS=5

IMPORTANT: As shown in the previous example, the value in NUM_OF_VFS must be one more than the max_vfs specified in the esxcli system module parameters set -m nmlx5_core -p max_vfs=4,4,4,4 command that is used to configure the ESXi OS settings for the ConnectX-4 cards.

d. For each MST device, verify that the parameters were set as described in the previous step.

./mlxconfig -d <mst-device-name-of-connectx-4-card> q

e. Reboot the Compute Node.

6. Configure the ESXi OS settings for the ConnectX-4 cards.

a. Enable SR-IOV for each ConnectX-4 card by setting the maximum VFs for each port on the card to 4.

[root@localhost:/opt/mellanox/bin] esxcli system module parameters set -m nmlx5_core -p max_vfs=4,4,4,4

NOTE: Each 4 in the max_vfs field corresponds to one port on a ConnectX-4 card. Each card has two ports. For one card, max_vfs=4,4. For two cards, max_vfs=4,4,4,4. For three cards, max_vfs=4,4,4,4,4,4.

b. Verify that max_vfs is set correctly.

[root@localhost:/opt/mellanox/bin] esxcli system module parameters list -m nmlx5_core
Name      Type    Value     Description
max_vfs   uint    4,4,4,4   Number of PCI VFs to initialize
                            Values : 0-16, 0 - disabled
                            Default: 0
[root@localhost:/opt/mellanox/bin]

c. Reboot the Compute Node.

7. Use the following command to verify that four Virtual Functions display for each port on each ConnectX-4 card installed on the Compute Node.
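The exact command referred to in step 7 is not reproduced in this copy of the guide. One way to list the virtual functions on an ESXi host is the lspci utility included with ESXi; the grep pattern below is only an assumption and simply filters for Mellanox devices, among which the virtual functions appear alongside the physical functions.

[root@localhost:~] lspci | grep -i mellanox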

63 Failover support for vns network interface cards (NICs) in a VMware environment NIC sharing type PCI passthrough SR-IOV vmxnet3 Level of support When configured for PCI Passthrough, supports both CLIM-to-CLIM failover and Interface Bonding failover. When configured for SR-IOV, supports CLIM-to-CLIM failover. Does not support Interface Bonding failover. Does not support CLIM-to-CLIM failover or Interface Bonding failover. Import vns package and set up Orchestrator Prerequisites Ensure that you have met the Requirements for vns on VMware. Procedure 1. Import vns deployment package com.hpe.vns.package to VMware vrealize Orchestrator. a. To install the Orchestrator workflow package, start VMware vrealize Orchestrator and select Design mode in the top dropdown menu. b. Click the Packages icon (top left). c. Right-click in an empty area and select Import package. d. Browse to the com.hpe.vns folder. Click Open. 2. Set up Orchestrator for Virtual Machine (VM) tagging by performing the following steps. a. Select Start workflow VAPI/Import vapi metamodel. The Start Workflow: Import vapi metamodel dialog displays. Failover support for vns network interface cards (NICs) in a VMware environment 63

64 b. In the dialog, enter the following. In vapi endpoint URL, enter the applicable vcenter IP address for your installation. The address must follow the form ip>/api. Click Yes to Do you want to secure protocol connection? For User name and Password, enter the Single Sign-on User Name and Password. Click No to Do you want to ignore certificate warnings? Click Yes to Do you want to add vapi endpoint using the same settings? c. To run the workflow, click Submit. The following table shows the tags that are created for each Virtual Machine type and uses a system with Expand Node Number 11 as an example. The tag category is vns. VM Type Type Tag System Tag CPU CPU ExpandNode-011 Storage CLIM SCLIM ExpandNode-011 Table Continued 64 Installing Virtualized NonStop software and tools on VMware

65 VM Type Type Tag System Tag IP CLIM NCLIM ExpandNode-011 Telco CLIM OCLIM ExpandNode Add the vsphere vcenter Server Plug-in to Orchestrator. Plan and create the vns system for a VMware environment A sample JSON specification file (vnsdemo.json) is included in the vns VMware installation package. The sample file provides the global system attributes and arrays for the CPUs and CLIMs and is the input to the package workflows. Using a text editor such as Notepad or WordPad, you edit the sample file to specify your vns system. IMPORTANT: Do not use Microsoft Word to edit the vnsdemo.json file. Only use a text editor such as Notepad or WordPad to edit the file and ensure that you save the file in text format. Prerequisites Ensure that you have met the Requirements for vns on VMware. Procedure 1. To access the sample JSON file, select Design mode in Orchestrator. Click the Resources tab and expand the vns node in the left tree pane. Right-click vnsdemo.json and select Save to File. To save the file for local access, select a location. Click Save. 2. Using a text editor, edit the JSON file to specify your vns system. For file details, see Supported attributes for JSON specification file for VMware. Plan and create the vns system for a VMware environment 65

66 3. When you have finished specifying the vns system, select Run mode in Orchestrator. Select the Workflows tab and expand the left tree pane to view the vns node. Expand the vns node to view the vns workflows. 4. To create the vns CPUs, storage volumes, and vclims, run the Create System workflow by clicking the Run icon (green arrow). A dialog displays with a prompt to the JSON file name. Click the prompt and browse to the configuration file that you previously edited. Click Submit. For more information about the VNS VMware workflow run, see the workflow run log generated by Orchestrator. Supported attributes for JSON specification file for VMware The following specification tables describe the supported attributes for the JavaScript Object Notation (JSON) specification file. System specification attributes in the JSON file for VMware CPU specification for VMware (JSON file) CLIM specification for VMware (JSON file) Network interface object attributes for VMware (JSON file) Storage volume specification for VMware (JSON file) System specification attributes in the JSON file for VMware Attribute expandnodenumber sysname sysserial sysclass networkupperoctet datacentername mlannetworkname mlannetmask Description The new Expand Node Number for the system to use. A valid value is between The Virtualized NonStop system (vns) name without a backslash ( \ ). A string of character length between 1-7. The system serial number. This number must be a five or six digit decimal number, zero-filled to left. The class of the vns system. Valid values are Entry or High. The uppermost octet of the system network addresses. The default is 10. HPE does not recommend changing the default value. The name of the VMware data center that will have the vns virtual machines. The name of the VMware Port Group for the Maintenance LAN. The IPv4 network mask of the Maintenance LAN. Table Continued 66 Supported attributes for JSON specification file for VMware

67 Attribute ztcp0 ztcp1 climimagedatacenter climimagepath meuclimdisksizegb sriovxfabric sriovyfabric cpucores Description The IPv4 address of the $ZTCP0 stack. The IPv4 address of the $ZTCP1 stack. The name of the VMware data center that has the vclim software. The path to the vclim software. Storage volume size in GB allocated to the MEU CLIMs (SCLIM000 and SCLIM001). The name of the Port Group to be used for the X fabric. The default is X Fabric Port Group. The name of the Port Group to be used for the Y fabric. The default is Y Fabric Port Group. The number of cores for each CPU. Must match the license installed on the system. Valid values are 1,2,4,6. Default is 4. 1 is an Entry class system 2, 4, or 6 is a High end system cpuramgb The CPU GB memory size. Optional. Entry class valid values are 32GB or 64GB. Default is 32GB. High end valid values are 64GB or 192GB. Default is 64GB. cpus clims nskvols An array of objects specifying the CPUs in the system. For more information, see CPU specification for VMware (JSON file). An array of objects specifying the CLIMs in the system. For more information, see CLIM specification for VMware (JSON file). An array of objects specifying the storage volumes in the system. For more information, see Storage volume specification for VMware (JSON file). Table Continued Installing Virtualized NonStop software and tools on VMware 67

68 Attribute poweron faultzone Description Indicates whether the CPU and CLIM VMs is to be powered on at the successful conclusion of the workflow. Default is false. Level of fault isolation (see Fault isolation levels for vns on VMware. Valid values are 0, 1, 2. Default is 2. CPU specification for VMware (JSON file) Every CPU to be added or modified in the system is described by an entry in the cpus array of the JavaScript Object Notation (JSON) file. The CPU Virtual Machine to be created uses this format: <sysname>_cpu<cpunumber> Attribute cpunumber hostip datastorename pciaddress Description The CPU number in the Virtualized NonStop system. A valid value is between The IPv4 address of the VMware host. The name of the VMware data store that has the CPU memory, the VM configuration file, the serialout file, and other files. The PCI address of the SR-IOV enabled hardware for system interconnect on the host. CLIM specification for VMware (JSON file) Every CLIM to be added or modified in the system is described by an entry in the clims array of the JavaScript Object Notation (JSON) file. The vclim Virtual Machine to be created uses this format: <sysname>_type<climnumber> NOTE: Storage CLIMs are allocated 4GB of memory. IP/Telco CLIMs are allocated 16GB of memory. Attribute type Description CLIM type. Valid values are the following. SCLIM for a Storage CLIM. NCLIM for an IP CLIM. OCLIM for a Telco CLIM. climnumber hostip Unique CLIM number (integer) which will be included in the name of the Virtual Machine. IPv4 address of the VMware host. Table Continued 68 CPU specification for VMware (JSON file)

69 Attribute datastorename pciaddress Description Name of the VMware datastore that has the CLIM memory, the VM configuration file, the serial-out file, and other files. PCI address of the SR-IOV enabled hardware for system interconnect on the host. cores Number of cores for the CLIM (valid values are 4 or 8). eth0 networkinterfaces IPv4 address of the Eth0 port on the CLIM that will connect to the Maintenance LAN. An array of network interface objects for the CLIMs. Storage CLIMs support up to two network interfaces. IP and Telco CLIMs support up to five network interfaces. For more information about the network interface object properties, see Network interface object attributes for VMware (JSON file). Network interface object attributes for VMware (JSON file) The following table describes the network interface object attributes associated with a CLIM. For more information about the networkinterfaces attribute, see CLIM specification for VMware (JSON file). Attribute interfacename Description The name of the network interface. For VMware, valid interface names are the following. Storage CLIM: eth1 or eth2. IP or Telco CLIM: eth1-eth5. pciaddress networkname macaddress PCI Address of hardware on the VMware host for the network interface. Required for PCI Passthrough or SR-IOV; not applicable for VMXNET3. The name of the VMware Port Group. Required for VMXNET3 or SR-IOV; not applicable for PCI Passthrough. The MAC address of the Network Interface Controller Port (for example, 0000:81:00.0). Required for PCI Passthrough. Installing Virtualized NonStop software and tools on VMware 69
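To make the attribute tables above concrete, the fragment below sketches how the system, CPU, and CLIM objects (including a networkinterfaces entry) fit together in one JSON file. All values are placeholders chosen for illustration rather than taken from a real configuration, and the storage volume entries (nskvols) described in the next table are omitted here; the vnsdemo.json sample shipped with the package remains the authoritative starting point.

{
  "sysname": "VNS1",
  "expandnodenumber": 11,
  "sysserial": "012345",
  "sysclass": "Entry",
  "datacentername": "Datacenter1",
  "mlannetworkname": "Maintenance LAN",
  "faultzone": 2,
  "cpucores": 1,
  "cpus": [
    { "cpunumber": 0, "hostip": "<ESXi host 1 IP>", "datastorename": "datastore1", "pciaddress": "<SR-IOV PCI address>" },
    { "cpunumber": 1, "hostip": "<ESXi host 2 IP>", "datastorename": "datastore2", "pciaddress": "<SR-IOV PCI address>" }
  ],
  "clims": [
    {
      "type": "NCLIM",
      "climnumber": 0,
      "hostip": "<ESXi host 1 IP>",
      "datastorename": "datastore1",
      "pciaddress": "<SR-IOV PCI address>",
      "cores": 4,
      "eth0": "<maintenance LAN IP>",
      "networkinterfaces": [
        { "interfacename": "eth1", "networkname": "External Network" }
      ]
    }
  ]
}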

70 Storage volume specification for VMware (JSON file) Every NSK storage volume to be added or modified in the system is described by an entry in the nskvols array of the JavaScript Object Notation (JSON) file. NOTE: Ensure that imagedatacenter and imagepath properties are specified for the $SYSTEM volume and use the SUT, which is the VMDK file. Attribute Name sizegb Description Name of the NSK volume, including $ in the first character (8 character maximum). Supported volume size in GB. $SYSTEM: GB Other volumes: 1-600GB primaryclim mirrorclim imagedatacenter imagepath NSK name of the CLIM (not the VM name) providing the primary path to the volume. NSK name of the CLIM (not the VM name) providing the mirror path to the volume. Name of VMware datacenter that has the initial disk. If the imagedatacenter attribute is left blank, an empty disk is created. The path to the disk image on the datastore. Creating a Virtualized NonStop System (vnsc) for VMware IMPORTANT: If you already have a Windows Server VM for the vnsc, install NSC DVD 30 or later on a physical DVD drive or copy the DVD image file to the datastore. Then skip to step 6c in the following procedure. Procedure 1. Using VMware vsphere Web Client, upload Windows Server 2012 or 2016 image file to a datastore in your datacenter. 2. In the Navigator pane, expand the datacenter that will contain the NSC Virtual Machine (VM). Select a Host. Select New Virtual Machine. The New Virtual Machine dialog displays. The New Virtual Machine dialog provides a series of steps in the left pane. 70 Storage volume specification for VMware (JSON file)

71 In the New Virtual Machine dialog box, perform the steps by selecting the following. a. 1a Select a creation type and Create a new virtual machine. Click Next. 2a Select a name and folder. Enter a name for the VM. Click Next. b. 2b Select a compute resource. Select the applicable host and ensure "Compatibility checks succeeded" displays in the Compatibility box. c. 2c Select storage. From the drop-down menu, select the applicable storage policy. Select the destination datastore from the displayed list. Ensure "Compatibility checks succeeded" displays in the Compatibility box. d. 2d Select compatibility. Select ESXi 6.5 and later. Click Next. e. 2e Select a guest OS. Select the following options for the guest OS. Guest OS Family: Windows Guest OS Version: Applicable server version that corresponds to the image file uploaded in step 2 of this procedure. f. 2f Customize hardware. The Customize hardware selection has three tabs: Virtual Hardware, VM Options, and SDRS Rules. In the Virtual Hardware tab, expand CPU and select the following to configure the CPU VMs. CPU: 2. Cores per Socket: 2. Memory: 8GB. Installing Virtualized NonStop software and tools on VMware 71

Hard Disk 1: 40GB (40GB is the minimum required).
Connect Network adapter 1 to the VM Network for the Maintenance LAN.
Small Computer System Interface (SCSI) Adapter: SCSI Controller 0 (LSI Logic SAS).
Video Card: Auto-detect.
Click Next.

3. Select 3 Ready to Complete if all settings are correct. Click Finish.
4. Power on the VM. Launch a VMware Remote or Web Console to the new Windows VM.
5. Complete the following customization steps.
a. Select the Windows Language and click Next.
b. Select the Operating System to install.
c. Select Accept the License Agreement.
d. Select Custom Install (for the new Windows installation).
e. Select the Windows installation location (typically, Drive 0).
f. Set the Administrator password when prompted. Click Finish.
g. After Windows installation completes, you are prompted to enter Ctrl+Alt+Delete. If using the VMware Remote Console, click the icon (that resembles three keys) in the Windows toolbar to issue the command sequence.
h. Enter the administrator password and answer any remaining customization prompts.
6. Install the VMware Tools and install the NSC DVD image files.
a. In the vSphere Web Client, click the Install VMware Tools link. In the displayed dialog box, click Mount.
b. In the VMware Remote Console, run the VMware Tools installer mounted on the DVD drive. When installation completes, restart the vnsc.

TIP: After the NSC restarts, you can use Remote Desktop rather than the VMware Remote Console to connect to the vnsc.

c. In the vSphere Web Client, locate the CD/DVD drive, click the connection icon, and select Disconnect to dismount the Windows ISO file.
d. Select Connect to CD/DVD image on a datastore. In the Select File dialog box, locate the NonStop NSC DVD image file.
e. See the NonStop System Console Installer Guide for instructions on installing the NonStop System Console Utilities.

Managing vns tasks on VMware

Creating a vns system in VMware

To create a vns system in VMware, see Plan and create the vns system for a VMware environment.

73 Removing or reducing a vns system in VMware To remove a vns system in VMware, use the existing VMware mechanisms. NonStop CPUs must be removed in pairs starting with the highest number CPU. For example, to reduce an eight CPU system to a four CPU system, you would remove CPUs 7 and 6 and then CPUs 5 and 4. Expanding a vns system in VMware Adding CPUs and CLIMs in VMware 1. To add CPUs and/or CLIMs, create a JSON file that includes only the new CPUs and/or CLIMs and the system specification. 2. Set the faultzone value in JSON file to 0 to suppress fault isolation checking. 3. Run the Create System workflow to create the new CPU and/or CLIM VMs. After running the workflow, launch the OSM System Configuration Tool to add the new CPU and/or CLIM VMs to the vns system. Adding storage volumes in VMware 1. To add a storage volume, create a JSON file that includes only the new NSK volumes and the system specification. 2. Set the faultzone value in JSON file to 0 to suppress fault isolation checking. 3. Power off the CLIMs that are specified as primaryclim and mirrorclim for the new added volume. Run the Create System workflow again. (Optional) If you do not want to bring down the CLIMs, use the VMware vsphere Web Client to add new hard disks to the applicable primaryclim and mirrorclim. Select the VMs for the Storage CLIMs and select the Edit Settings action. Ensure that the hard disk properties are as follows: Size in gigabytes. Note that this value will be multiplied by , while the value specified in the JSON file is multiplied by , so the largest value that can be specified is 558. Type: Thick provision eager zeroed. Shares: High Disk Mode: Independent -- Persistent Virtual Device Node: Select or create a SCSI controller that is of type: VMware Paravirtual. A maximum of 15 disks can be attached to one SCSI controller. 4. After the new disks have been created, run LUNMGR to complete the disk configuration. Redeploying a vns system in VMware 1. To redeploy the vns system in VMware, change the Expand node number of the NonStop system by changing the ExpandNodeNumber in the JSON specification file to the new number. In this case, a minimal JSON file can be used. The following properties are required: sysname, datacentername, expandnodenumber, cpus, cpunumber, clims, type, climnumber 2. Run the Change System ExpandNodeNumber workflow (VMs must be powered off before running the workflow). Installing Virtualized NonStop software and tools on VMware 73
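Based on the list of required properties above, a minimal JSON file for the Change System ExpandNodeNumber workflow might look like the following sketch. The names and numbers are placeholders, and the CPU and CLIM arrays only need enough information to identify the existing VMs.

{
  "sysname": "VNS1",
  "datacentername": "Datacenter1",
  "expandnodenumber": 12,
  "cpus": [
    { "cpunumber": 0 },
    { "cpunumber": 1 }
  ],
  "clims": [
    { "type": "SCLIM", "climnumber": 0 },
    { "type": "SCLIM", "climnumber": 1 },
    { "type": "NCLIM", "climnumber": 0 },
    { "type": "NCLIM", "climnumber": 1 }
  ]
}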

74 Deploying the Virtualized NonStop System IMPORTANT: Make sure that you complete Mandatory prerequisites for a vns system in an OpenStack private cloud on page 34 before proceeding with deployment. Procedure 1. Log on to Horizon and select your vns project. Select Project->NonStop->Servers. Click Launch System. The initial user interface displays. 2. The Enter the Virtualized NonStop system details dialog appears. For more information, click? to bring up the help dialog with instructions. a. In System Name, enter the system name. System name must be alphanumeric, less than 8 characters, and begin with a letter. There is no requirement for the \ character before System Name. b. In Expand Node Number, enter the Expand Node Number. A valid Expand node number is between c. Select the Initial Image for HSS CPU image to be used to boot the CPUs. The same HSS image is used for all CPUs. d. Select the Initial Image for CLIM image to be used to initially boot the CLIMs. The same CLIM image will be used for all CLIMs. e. (OPTIONAL) Select the CLIM Boot Volume Type for the CLIM boot disks. The same volume type will be used for all CLIM boot disks. f. From the Virtualized NonStop System Class dropdown, select the class of the system to be created. The system class must match the Core License file that will be installed on the system. g. In Virtualized NonStop System Serial Number, enter the system serial number to be used for this system. This must match the system serial number in the Core License file that will be installed on the system. 3. Click Next. The Select the desired Fault Isolation level for this system dialog appears. IMPORTANT: If you have not planned your fault isolation level, or need more information about fault isolation, see Fault zone isolation options for vns on page 31. From the Fault Isolation drop-down menu, select one of the following: NonStop Standard CPUs Only None 74 Deploying the Virtualized NonStop System

75 4. After selecting your Fault Isolation level for the system (and, if applicable, availability zones), click Next. The Set Fabric Networks dialog appears. Click? to launch the instructions for the help dialog. 5. After selecting your Fabric Networks for the system, click Next. The Select which network you would like to use for the Maintenance LAN dialog appears. Select an existing network that uses IPv4 to be the Maintenance LAN for the vns system. NOTE: If you have more than one subnet (optional) the Select subnet: field displays the subnets. 6. Enter the IP addresses in the $ZTCP0 IP Address (IPv4 format) and the $ZTCP1 IP Address (IPv4 format) fields. HPE recommends that you use the same default IP addresses ( and ) as physical NonStop X systems. These default IP addresses are shown in this example dialog. Deploying the Virtualized NonStop System 75

76 TIP: If two or more deployed vns systems use the same Maintenance LAN and subnet configured in OpenStack, see the procedure for adjusting IP addresses described in the NonStop Dedicated Service LAN Configuration and Management Manual for NonStop X Systems. That procedure also applies to vns systems. 7. Click Next. The Select the number of logical CPUs for the vns system and the flavor of the CPUs dialog appears. For more information about the dialog, click? to bring up the help dialog with instructions. For more information about flavors, see Flavor management for Virtualized NonStop virtual machines on page 32. NOTE: Ensure your license(s) allow your CPU count and flavor selections. a. In CPU Count, enter the number of NonStop logical processors to be created for the vns system. b. Under CPU Flavor, select a flavor name. The same flavor name will be used for all logical processors. 76 Deploying the Virtualized NonStop System

77 8. Configure a minimum of two Storage vclims named SCLIM000 and SCLIM001. From the CLIM Type dropdown menu, select Storage. a. Expand the bottom of the dialog to display CLIM Flavor. Select the radio button that has the flavor for SCLIM000. b. In Maintenance LAN IP Address for this CLIM, enter the IP address for the virtualized SCLIM000. HPE recommends that you use the same default IP address ( ) as a physical SCLIM000. c. Under the IP address, click Add New. If you want to cancel adding the CLIM, click Reset. Deploying the Virtualized NonStop System 77

78 The first required Storage CLIM (SCLIM000) appears in CLIMs to be created. 9. The initial CLIM type dialog box reappears and the CLIMs selection remains selected in the left pane. Add the second required Storage vclim (SCLIM001) with a new Maintenance LAN IP address (the SCLIM001 vclim uses the same flavor as the SCLIM000 vclim). HPE recommends that you use the same default IP address ( ) as a physical SCLIM Deploying the Virtualized NonStop System

79 10. Under the IP address, click Add New. The second required Storage vclim (SCLIM001) appears in CLIMs to be created along with previously added first Storage vclim. TIP: If you have only two Storage vclims, proceed to the next step to add IP vclims. Or, continue adding the number of Storage vclims allowed by your license (with distinct IP addresses). Remember to select Add New after filling in the entries for each Storage vclim in the dialog. 11. After you click Add New for the last Storage vclim, select IP for the vclim type. Add a minimum of two vclims named NCLIM000 and NCLIM001. Deploying the Virtualized NonStop System 79

80 a. Expand the bottom of the dialog to display CLIM Flavor. Select the radio button that has the flavor for NCLIM000. b. HPE recommends that you enter Maintenance LAN IP address for the first CLIM (IP or Telco) named NCLIM000. c. Under the IP address, click Add New. The first vclim (IP or Telco) NCLIM000 appears in CLIMs to be created along with the previously added Storage vclims. 80 Deploying the Virtualized NonStop System

12. HPE recommends that you enter the Maintenance LAN IP address for the second vclim (IP or Telco) named NCLIM001.

a. Under the IP address, click Add New. The second vclim (IP or Telco) NCLIM001 appears in CLIMs to be created along with the previously added vclims.

At this point, the minimum required number of vclims (2 Storage, 2 IP) has been added to create a vns system, as shown in this example dialog.

82 b. Repeat the same process for each additional vclim beyond the two required storage and IP vclims. c. Click Next. 13. The Enter the requested information for each Volume you would like to create dialog appears. This dialog lets you add OpenStack Volumes (block storage) and attach them to Storage vclims. a. The first OpenStack Volume you must create is $SYSTEM which is required and predefined. In this example dialog, $SYSTEM is shown in Name for this Volume. b. In Size for Volume in Gigabytes, enter a value. If a mirrored pair is used, the same size is used for both halves. HPE recommends using full provisioning as thin provisioning can significantly impact performance. The dialog shows a value of 100 just as an example. If your OpenStack administrator has created multiple volume "Types" in OpenStack Cinder block storage, for different storage options, you can select the appropriate Type for your Volume. c. In Type for this Volume, select the radio button that corresponds with your storage option. Click Add New to add $SYSTEM as the first volume. NOTE: Due to OpenStack restrictions, you can attach an OpenStack Volume to only one Storage vclim. NonStop Backup and Mirror Backup paths are not supported. 82 Deploying the Virtualized NonStop System

83 d. In Initial Image for this Volume, select an image to upload onto the disk when it is created. This is an optional step for all disks other than $SYSTEM. e. In Primary CLIM, select the CLIM where the primary disk will be attached. f. In Mirror CLIM, select the CLIM where the mirror disk will be attached. This is an optional step for all disks other than $SYSTEM. If no Mirror CLIM is selected, an unmirrored volume is created. 14. After entering all information for the volume, Click Add New. To cancel adding the volume, click Reset. 15. Create the $AUDIT volume. You will need $AUDIT for later installation steps involving TMF, DSMSCM, and KMSF. a. In Name for this Volume, enter $AUDIT. b. In Size for Volume in Gigabytes, enter a value. The dialog shows a value of 600 just as an example. c. In Type for this Volume, select the radio button that corresponds with your storage option. NOTE: The selected volume type corresponds to raid-0 with full provisioning in the storage backend. d. Click Add New which adds $AUDIT as the second volume after $SYSTEM. Deploying the Virtualized NonStop System 83

16. To add all volumes, repeat these steps.
17. Click Next. The Select CLIM to configure network interfaces dialog appears. Click ? to launch the help dialog and follow the instructions in the help to complete this dialog.
a. From the Select VNIC port type drop-down, select one of the following VNIC port types to be created on the network:
   normal: provides a default VirtIO port in OpenStack
   direct: provides an SR-IOV passthrough port
   direct-physical: provides a PCI passthrough port
For more information, click ? to bring up the help dialog.
b. After selecting the network to be added to the CLIM, click Attach. To cancel adding the network, click Reset. Repeat this step until all networks are added.

c. (Optional) You can save the currently specified configuration into a YAML file on your local system. The YAML file can be used by the command-line interface to create the system. Select Save Configuration in the left pane. d. After all networks are added, click Create System to deploy the OpenStack instances, volumes, and other components for your vns system. The time needed to create a minimum system (2 CPUs, 2 IP/Telco CLIMs, 2 Storage CLIMs, and 1 vnsc) depends on how familiar the administrator is with the process and on the number of disk volumes; larger systems or additional disk volumes can increase the create time. e. After Create System has completed, a new vns system appears under the Project->NonStop->Servers screen. The OpenStack components for vns CPUs, vclims, and Volumes appear in your project as Project->Compute->Instances and Volumes. The NonStop system name appears as the first part of the name of these instances and volumes. NOTE: Although the vnsc is required for a minimum system, it is not created during "create system" and must be created separately. For more information, see Creating a vnsc. NOTE: Occasionally, during the system create process, one or more of the newly created virtual machines will shut down. If a storage CLIM virtual machine powers off, the system creation process can fail with an error when attempting to attach the NSK volumes to the powered-off storage CLIM. To prevent this, use the Project->Compute->Instances panel in the Horizon dashboard to view the state of the virtual machines. If any shut down, click Start Instance to restart it. Post-deployment procedures for vclims Prerequisites NOTE: These procedures are not required if you are using L17.08 or later versions of both the NonStop Deployment Tools and the CLIM DVD. Perform this procedure on each vclim. This procedure is done inside the vns system's project. Procedure 1. Log in to Horizon. Select Project>Instances and then the vclim (for example, NCLIM000). There are two interfaces for the vclim: Log: A virtual serial port that

Provides diagnostic logs through vclim boot and run. Can be used to troubleshoot unresponsive vclims. Only intended for authorized service providers or administrators. Console: Has similar functionality as the iLO remote console and provides a login shell to the vclim with a virtual display. Figure 6: Login screen for vclim 2. Using the Horizon console login shell, configure each vclim's eth0 address by logging in with the vclim's login credentials. NOTE: If you are using L17.08 or later versions of both the NonStop Deployment Tools and the CLIM DVD, this step is not required. The CLIM software automatically performs this step for you. user: root password: hpnonstop Example 6 Configuring eth0 on vclim (the IP address and netmask values are placeholders; use the values allocated to the vclim's maintenance port) climconfig interface add eth0 climconfig ip -add eth0 -ipaddress <ip-address> netmask <netmask> climconfig interface modify eth0 mtu 1450 NOTE: The IP address should be the same as the port's allocated IP address. Use the MTU that you recorded earlier during Planning tasks for a Virtualized NonStop system in an OpenStack private cloud. 3. Do not configure any other ports until the vclim is in a STARTED state. The vclim configuration is not complete until the vclim and other VMs are configured through the vns System Configuration Tool and the system is coldloaded. 4. Repeat steps 1-4 on each vclim. Once completed, proceed to Configuring a provisioned Virtualized NonStop system.
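The commands below are an optional verification sketch and are not part of the original procedure. After configuring eth0 with climconfig, you can confirm the address and MTU from the same vclim login shell with standard Linux commands; the gateway address is a placeholder for an address on your maintenance LAN.
ip addr show eth0
ip link show eth0
ping -c 3 <maintenance-lan-gateway>
If the address, netmask, or MTU does not match the values recorded during planning, correct them with climconfig before proceeding.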

87 Configuring a provisioned Virtualized NonStop system Prerequisites A provisioned vns system still needs to be configured to make sure that the provisioned vns VMs can communicate. Procedure 1. Log on to the vnsc, preferably using Remote Desktop. From the Windows Server 2012 R2 Start menu on the lower left, click OSM System Configuration Tool to launch the tool. The OSM Console Tools on the vnsc have a new option, Configure a Virtualized NonStop System. Select that option in the initial dialog displayed. Click Next to view the Log on to MEUs dialog. Configuring a provisioned Virtualized NonStop system 87

88 2. Click Next to view the Input System Info dialog. 88 Configuring a provisioned Virtualized NonStop system

89 In this screen, enter the system name you configured in the first dialog of the OpenStack project "Launch New Server" wizard. Enter a \ as a prefix, even though you did not do so in the "Launch New Server" wizard. Then enter the Expand Node Number you chose earlier as the Expand Node Number in the System Configuration Tool. Finally enter the Virtualized NonStop system serial number of the license you selected in the "Launch New Server" wizard. Review these values to be sure that they match what you entered earlier. NOTE: The "Advanced Configuration" button should be ignored as it is not used for a Virtualized NonStop system. 3. Click Next. The Select Number of Processors dialog appears. In number of processors, enter the number of CPUs previously entered in the "NSK CPUs" dialog of the "Launch New Server" wizard. Configuring a provisioned Virtualized NonStop system 89

90 4. Click Next. The Input CLIM info dialog appears. Review and follow the instructions on the dialog. 5. Continue discovering CLIMs until all of the CLIMs are listed in the pane at the bottom of the dialog. NOTE: Ignore the Change ilo Config since it is not used for Virtualized NonStop. 90 Configuring a provisioned Virtualized NonStop system

91 6. Click Next to see a summary of system information. If the information is as expected, click Next again to perform the configuration steps on the CLIMs and MEUs. 7. Now that the vns system has been provisioned and configured, you can boot from HSS into NSK using the System Startup Tool. TIP: You can use the System Startup Tool to connect to the CLIMs and MEUs the same way as physical NonStop X systems. For more information, see the online help within the OSM System Startup Tool. a. Select Start System in the Operations menu. A dialog like the following appears. Configuring a provisioned Virtualized NonStop system 91

b. Make only these changes in the System Startup dialog box:
   Under SYSnn and CIIN Option, enter 00 in SYSnn: (this is for the initial system load).
   Under Configuration File, select Saved Version (CONFxxyy) and enter 0000 (which results in CONF0000).
   You can modify LUNs for the $SYSTEM primary drive or mirror drive by double-clicking the row of the relevant CLIM. This brings up an additional dialog for altering the LUN.
8. Click System Start. After a few minutes, the MR-Win6530, CLCI, and CNSL windows should appear. At this point, configure the initial NSK settings and perform the vclim and SCF configuration needed to recognize the attached OpenStack disk volumes as NonStop disks and to configure the network interfaces on vclims with NSK CIP providers in SCF, as described in Booting the vns system on page 92.
Booting the vns system
Prerequisites
Ensure that you have ordered a SUT (E-media). You will need it to perform this procedure.

Procedure
1. The system will first boot with a minimal configuration. The provisioned vclims and volumes need to be configured in SCF so that they are available for use in NSK.
2. Configure additional network interfaces on the vclims.
3. Configure IP providers in SCF.
4. Add additional vclims to SCF.
5. Run LUNMGR on each Storage vclim. For more information on adding LUNs and other LUN Manager commands, see LUN Manager commands for virtualized environments.
6. Add disks to SCF. For more information, see the SCF Reference Manual for the Storage Subsystem.
7. Make sure that the time and time zone offset are correct.
8. After setting up NonStop volumes, add processor swap files for the Virtualized NonStop processors, using limited space on $SYSTEM and additional space on any KMSF swap volumes that you created in OpenStack and then configured in the vclims (accept the LUNs for the vclim storage devices) and in the SCF storage subsystem.
9. Save your new SCF configuration as a valid CONFxx.yy so that it can be referred to explicitly at a later system load (see the example following this procedure).
10. Configure and start TMF. For more information, see the TMF Planning and Configuration Guide.
11. Initialize SQL/MP. For more information, see the SQL/MP Installation and Management Guide.
12. If necessary, configure and start OSS. For more information, see the Open System Services Installation Guide.
13. DSM/SCM should generally be moved from the custom $SYSTEM disk image for your system to the configured $DSMSCM volume, so that DSM/SCM can be configured and started.
14. Add NSK users and perform other basic NSK setup.
15. Use NonStop Software Essentials (NSE) to install or update any desired SPRs. You will also want to use NSE to perform a Receive Software and an initial Build/Apply with the SUT as the base software level, and to install any applicable SPRs. Once the vclims and volumes have been configured, NSE can be used to install any additional SPRs that are needed by the customer, or to update any SPRs that need updating. See the applicable software installation and upgrade guide (for example, the L18.02 Software Installation and Upgrade Guide).
At this point, the system is ready for the customer application to be deployed on it.
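The command below is a minimal sketch of step 9. The SCF SAVE CONFIGURATION syntax is the one shown later in this guide under Shutting down the vns system and vclims; the version 01.00 is only an illustration, so substitute the CONFxx.yy version that your site wants to use at the next system load.
TACL> SCF SAVE CONFIGURATION 01.00
The saved configuration can then be selected as the Saved Version (CONFxxyy) in the OSM System Startup dialog described earlier.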

vns administrator tasks
Scenarios for vns administrators
These topics describe some administrator tasks or considerations for the vns system.
Shutting down the vns system and vclims
Reviewing maximum transmission unit (MTU) in OpenStack on page 97
Managing vns resources on page 94
Scenarios that prompt reprovision of resources on page 94
TIP: For detailed vns OpenStack CLI commands, actions, and workflows for the vns system and resources, see vns OpenStack CLI commands and Horizon interface for vns.
Managing vns resources
$AUDIT is needed for TMF. TMF must be started in order to use the DSM/SCM tool for installing new versions of software on the NonStop system. The initial configuration of the DSM/SCM database is made available in a customized $SYSTEM image for the initially ordered RVU, based on a customer's purchased software. You can move the initial DSM/SCM database to the $DSMSCM volume to provide additional storage for installing and keeping track of software configuration changes such as new RVUs.
KMSF volumes are configured for processor swap space after the Virtualized NonStop system is first started, using the NSKCOM KMSF tool. Some initial swap files may be placed on $SYSTEM, but large processor memory sizes and a large number of NonStop logical processors create a need for one or more KMSF volumes.
Scenarios that prompt reprovision of resources
Scenario: A system resource is not working
Options: vclims, vns CPUs, and NSK volumes can be reprovisioned.
Scenario: HSS update
Options: vns CPU VMs are deleted and re-created using the updated HSS ISO image from Glance.
Scenario: Expand node number has changed
Options: CPU identity is passed in to the CPU by OpenStack and cannot be changed after the VM is created.
(Table continued on the next page.)

Scenario: vclim reimage
Options: Can pass an optional parameter to specify a vclim image, resulting in a new boot volume being created. By default, only deletes and recreates the vclim VM using the same boot volume and networks.
Scenario: Disk errors
Options: NSK volume re-provisioning is done one disk at a time. For a mirrored volume, this is one half of the mirrored volume at a time.
For more information about reprovisioning resources, see the:
Reprovision commands in vns OpenStack CLI commands
Reprovision actions and workflows in Horizon interface for vns
Save or load vns configuration in the Create System workflow
As of L18.02 and later RVUs, the Create System workflow is enhanced to save or load a vns system configuration. This enhancement is only supported for the Create System workflow. Key features of this enhancement include:
Creating an initial system and replicating it into multiple systems.
Saving the current configuration to a YAML file.
Loading a previous configuration from a YAML file.
NOTE: You cannot generate a YAML file from a vns system that has already been created.
For more information about saving or loading a vns configuration, click ? in the Create System workflow dialog box.
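A brief usage note that is not in the original text: a YAML file saved from the Create System workflow is in the same format that the vnonstop command-line interface accepts, so a saved configuration can be replayed to create another system. The file name below is hypothetical.
vnonstop system create mysystem-config.yaml
See system create in the vns OpenStack CLI commands appendix for the field-by-field description of the YAML file.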

96 Shutting down the vns system and vclims To perform an orderly shutdown of the vns system and vclims: Procedure 1. Save the configuration file by issuing SCF SAVE CONFIGURATION xx.yy For more information about this command, see the SCF Reference Manual for L-Series, J-Series, and H-Series RVUs. 2. Stop any running applications, including TMF. For more information, see the applicable documentation for the applications and the TMF Reference Manual. 3. Shut down SCF subsystem processes. For more information, see the applicable online help or manual for the SCF subsystem (for example, SCF Reference Manual for the Storage Subsystem). TIP: There is a list of SCF subsystem manuals in the SCF Reference Manual for L-Series, J- Series, and H-Series RVUs. 4. Log on to the System Startup Application and launch the OSM System Startup tool. a. From the upper left checkbox, select all processors. b. From the Processor Actions drop-down menu, select Halt. c. Click Perform Action. If prompted, confirm the action. 5. Perform a graceful shutdown of the CLIMs. a. From the virtual NSC, use PUTTY to log onto SCLIM000. Alternatively, you can also use the Horizon console. b. Shutdown each CLIM in the system by issuing these commands. 96 Shutting down the vns system and vclims

IMPORTANT: Ensure that SCLIM000 is shut down last.
prov %MAINT ssh <climname> clim stop
prov %MAINT ssh <climname> clim clearlog
prov %MAINT ssh <climname> shutdown -h now
6. Shut down the virtual NSC by powering down the system from Windows.
7. Stop each vns CPU VM individually. VM shutdown can be done using the control plane command line interface (CLI) or the Horizon interface.
From the CLI:
a. Run openstack server list to identify the vns CPU VMs.
b. Issue the following command for each vns CPU VM: openstack server stop <VM name>
From Horizon:
a. Log on to the Horizon interface.
b. Select the Shut Off Instance action for each vns CPU VM from the Instances tab of the Compute menu.
Reviewing maximum transmission unit (MTU) in OpenStack
An incorrect MTU can result in fragmentation or, on some configurations, even packet loss. This example shows the result of checking the MTU on a vns system using the net-show command and the Network tab in the Horizon interface.

Figure 7: MTU result in OpenStack
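As a sketch of the check described above, the MTU can be read from the control plane with the Neutron or OpenStack client; the network name vns-maintenance-net is a placeholder for one of your vns networks.
neutron net-show vns-maintenance-net | grep mtu
openstack network show vns-maintenance-net -c mtu
Compare the reported value with the MTU configured on the vclim interfaces (for example, the 1450 used in the eth0 example earlier); a mismatch is a common cause of the fragmentation described above.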

99 Troubleshooting vns problems on OpenStack Virtual Machines shut down during deployment on page 99 vns Deployment fails with a timeout waiting for disks to be created on page 99 Collecting vclim crash dumps and debug logs on page 100 vclim is unresponsive at the Horizon console on page 100 Using the vclim serial log to assist with troubleshooting on page 100 Debugging hypervisor issues on page 101 Networking issues for vclims on page 101 Issues with HSS boot, reload, or CPU not being online on page 102 Virtual Machines shut down during deployment Occasionally, one or more of the vns virtual machines may shut down during the deployment process. If one of these virtual machines is a storage CLIM, the deployment process will fail with an error. This is due to the fact that the additional volumes that need to be attached to a storage CLIM cannot be attached when the virtual machine is shut down. Recovery The virtual machines can be restarted using the Start Instance action in Horizon, or by running the openstack server start <server name> CLI command. Any virtual machine that shuts down can be restarted immediately. vns Deployment fails with a timeout waiting for disks to be created The vns deployment tools will wait up to 60 minutes, by default, for disks to be created. If the storage backend takes longer than that to create the disks, the action will fail and the system will be put into an error state. NOTE: If it takes longer than 60 minutes to create all the disks, HPE recommends checking the storage backend for misconfiguration. A misconfiguration could significantly reduce I/O performance of the running vns system. To prevent the vns deployment action from failing, increase the time the tools wait for the disks to be created. Procedure 1. Log on to the control node. If this is a multi-node control plane, run the following steps on each of the control nodes. 2. Edit the /etc/vnonstop/vnonstop.conf file and add the following section and configuration value. The time shown here is 7200 seconds (2 hours). Make sure to set it to a time that is long enough for the disks to be successfully created. [volume_api] Troubleshooting vns problems on OpenStack 99

wait_for_volumes_max_time = 7200
3. Restart the vns API service.
a. On RHOSP 10, run: systemctl restart vnonstop-api.service
b. On Ubuntu, run: service vnonstop-api restart
Collecting vclim crash dumps and debug logs
The primary tools for capturing debug information from a live vclim are:
CLIMCMD <climname> clim abort
CLIMCMD <climname> climdebuginfo
CLIMCMD <climname> clim onlinedebug
vclim is unresponsive at the Horizon console
If a vclim is unresponsive to commands issued with CLIMCMD, and you are unable to log into the vclim through the Horizon interface, you can force a vclim to abort and generate a debug archive through the hypervisor. This is equivalent to sending a Non-Maskable Interrupt (NMI) to a physical CLIM through the iLO.
Procedure
1. Find the compute node hosting the VM and the instance name.
root@cp1-comp004-mgmt:~$ nova show VNS1_NCLIM000 | grep OS-EXT-SRV-ATTR
OS-EXT-SRV-ATTR:host cp1-comp004-mgmt
OS-EXT-SRV-ATTR:hypervisor_hostname cp1-comp004-mgmt
OS-EXT-SRV-ATTR:instance_name instance-<nnnnnnnn>
2. Issue the virsh inject-nmi command to the VM directly from the compute node (the instance name is the one found in step 1):
ssh root@cp1-comp004-mgmt virsh inject-nmi instance-<nnnnnnnn>
Using the vclim serial log to assist with troubleshooting
The vclim moves initial boot logs to the virtual serial port during boot. If the vclim fails to boot, this log might provide additional troubleshooting information. To access the log, click VIEW FULL LOG, which provides a full history of the serial log since boot. This can be downloaded for attachment to support tickets.
TIP: The Console may be quiet during boot, especially when disks are resized during initial boot.

Figure 8: vclim serial log
Debugging hypervisor issues
When debugging problems involving the hypervisor, dumping the XML for the virtual machine can provide useful information, as can other hypervisor setup details such as the items validated during pre-installation.
Networking issues for vclims
Tcpdump is the main tool for network troubleshooting. As with physical CLIMs, many network problems in virtio_net configurations are switch or network problems. Many vclim network problems, or OpenStack or vclim configuration problems, are related to mismatched IP addresses, mismatched MTU, or traffic being blocked by security policies. To troubleshoot virtio_net, simultaneously collect tcpdump output from the CLIM interface (for example, eth1) and its tap interface (for example, tapf72804f7-3d). To find the tap interface:
Procedure
1. Find the neutron port ID of interest:
neutron port-list | grep NCLIM000_MNT_PORT
f72804f7-3d60-480c-864c-94be2c1afd73 NCLIM000_MNT_PORT fa:16:3e:62:c0:ed {"subnet_id": "3423f1bc-fc80-4b75-a c682bbc1a", "ip_address": " "}
2. Find and log in to the compute node for the vclim.

3. The tap interface name contains the first 10 bytes of the port ID:
ip link show | grep tapf72804f7
: tapf72804f7-3d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master qbrf72804f7-3d state UNKNOWN mode DEFAULT group default qlen
4. Trace it with tcpdump on the compute node:
sudo tcpdump -i tapf72804f7-3d -n
TIP: Often this will simply prove that the vclim is sending and receiving what you think it is, and the problem will need OpenStack troubleshooting. See the OpenStack administrator or troubleshooting guides on the Internet.
Issues with HSS boot, reload, or CPU not being online
This illustration describes how to access the CPU instance on a compute node using the virtual serial port.
Figure 9: HSS does not boot, reload issues, CPU not online
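The commands below are a sketch, not taken from the original text, of how the hypervisor-level information mentioned in these sections can be gathered on a KVM/libvirt compute node. The instance name is a placeholder for the value reported by nova show in OS-EXT-SRV-ATTR:instance_name.
virsh dumpxml instance-<nnnnnnnn>
virsh console instance-<nnnnnnnn>
virsh dumpxml captures the virtual machine definition for attachment to a support ticket, and virsh console attaches to the virtual serial port of a CPU or vclim instance from the compute node.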

Troubleshooting vns problems on VMware
vSphere vCenter Server disconnects ESXi hosts from the Datacenter, or other license expiration issues cause a disconnect to the Datacenter
If you encounter errors such as the following during any vRealize Orchestrator workflow execution,
[ :14:02.148] [E] Error in (Workflow:Get Data Center / get Datacenter (item1)#3) java.lang.RuntimeException: (vim.fault.InvalidLogin) { faultCause = null, faultMessage = null }
or notice that the existing Datacenter for the vSphere vCenter Plug-in displays an error similar to the one shown:
Solution
To refresh the VM tagging vAPI metamodel with the same parameters originally used for importing the Datacenter, run the "Update a vCenter Server Instance" workflow from Orchestrator/Workflows/Library/vCenter/Configuration.


Websites
General websites:
Hewlett Packard Enterprise Information Library
Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix
Storage white papers and analyst reports
For additional websites, see Support and other resources.

106 Support and other resources Accessing Hewlett Packard Enterprise Support For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website: To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website: Information to collect Technical support registration number (if applicable) Product name, model or version, and serial number Operating system name and version Firmware version Error messages Product-specific reports and logs Add-on products or components Third-party products or components Accessing updates Some software products provide a mechanism for accessing software updates through the product interface. Review your product documentation to identify the recommended software update method. To download product updates: Hewlett Packard Enterprise Support Center Hewlett Packard Enterprise Support Center: Software downloads Software Depot To subscribe to enewsletters and alerts: To view and update your entitlements, and to link your contracts and warranties with your profile, go to the Hewlett Packard Enterprise Support Center More Information on Access to Support Materials page: 106 Support and other resources

107 IMPORTANT: Access to some updates might require product entitlement when accessed through the Hewlett Packard Enterprise Support Center. You must have an HPE Passport set up with relevant entitlements. Customer self repair Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a CSR part needs to be replaced, it will be shipped directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider or go to the CSR website: Remote support Remote support is available with supported devices as part of your warranty or contractual support agreement. It provides intelligent event diagnosis, and automatic, secure submission of hardware event notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for remote support. If your product includes additional remote support details, use search to locate that information. Remote support and Proactive Care information HPE Get Connected HPE Proactive Care services HPE Proactive Care service: Supported products list HPE Proactive Care advanced service: Supported products list Proactive Care customer information Proactive Care central Proactive Care service activation Warranty information To view the warranty for your product or to view the Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products reference document, go to the Enterprise Safety and Compliance website: Customer self repair 107

108 Additional warranty information HPE ProLiant and x86 Servers and Options HPE Enterprise Servers HPE Storage Products HPE Networking Products Regulatory information To view the regulatory information for your product, view the Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise Support Center: Additional regulatory information Hewlett Packard Enterprise is committed to providing our customers with information about the chemical substances in our products as needed to comply with legal requirements such as REACH (Regulation EC No 1907/2006 of the European Parliament and the Council). A chemical information report for this product can be found at: For Hewlett Packard Enterprise product environmental and safety information and compliance data, including RoHS and REACH, see: For Hewlett Packard Enterprise environmental information, including company programs, product recycling, and energy efficiency, see: Documentation feedback Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hpe.com). When submitting your feedback, include the document title, part number, edition, and publication date located on the front cover of the document. For online help content, include the product name, product version, help edition, and publication date located on the legal notices page. 108 Regulatory information

Creating a Virtualized NonStop System console (vnsc)
Depending on your virtualized environment, see one of the following:
Creating a Virtualized NonStop System console (vnsc)
Creating a Virtualized NonStop System (vnsc) for VMware
Prerequisites for creating the Virtualized NonStop System Console (vnsc)
Prior to creating the vnsc, make sure you have the following:
Microsoft Windows Server 2012 R2 license and corresponding ISO file, or a physical DVD from which you can create an ISO file
Downloaded ISO image of the virtio drivers for Windows virtual machines running on KVM
NonStop System Console (NSC) Update 27 or later ISO
At least 20 GB for the empty volume that will hold the installed version of Windows and for creating the image
Horizon logon permissions
Internet Explorer 11.0
Creating a vnsc
Procedure
1. NOTE: If you are creating a vnsc on VMware, the following procedure does not apply. See Creating a Virtualized NonStop System (vnsc) for VMware.
Make sure you have met the Prerequisites for creating the Virtualized NonStop System Console (vnsc) before proceeding.
2. Upload each of the following into Glance as individual images:
Microsoft Windows Server 2012 R2 ISO
virtio driver ISO from the Fedora Project
NSC Update 27 or later ISO
3. Create an empty volume to hold the Microsoft ISO.

4. Create the VM from one of the OpenStack controllers by selecting an NSC flavor and using nova boot.
NOTE: If you have multiple networks, nova requires you to select one of those networks and add the <Network ID> to nova boot (as shown in the example). This is not required if you have a single network.
a. Select a vnsc flavor with at least 2 cores and 8 GB of RAM. For information on flavors, see Flavor management for Virtualized NonStop virtual machines on page 32.
b. Use nova boot to launch the vnsc.
nova boot \
--flavor <flavor name> \
--image <Windows DVD ISO image ID> \
--block-device id=<empty vol ID>,source=volume,dest=volume,shutdown=preserve \
--block-device id=<virtio driver ISO image ID>,source=image,dest=volume,size=1,bus=ide,type=cdrom,shutdown=remove \
--block-device id=<NSC DVD ISO image ID>,source=image,dest=volume,size=3,bus=ide,type=cdrom,shutdown=remove \
--nic net-id=<network ID, only necessary if more than one network is present> \
<Instance name>
c. Log on to Horizon. From the Instances panel, view the newly created VM. Once the VM is in a Running state, select the instance and then the Console tab.
5. Install Microsoft Windows Server 2012 R2.
a. Select your language and keyboard input.
b. Enter the Microsoft Windows Server 2012 R2 license key.
c. Select the Server with a GUI option that matches your license type: Windows Standard or Windows Datacenter.
d. Select Custom: Install Windows only (advanced).
e. Use the Where do you want to install Windows dialog to load the VirtIO storage drivers. Click Load driver.
f. The Select the driver to install dialog appears. Click Browse.
g. Expand the virtio CD drive (by clicking +). Expand viostor, expand 2k12R2, and select amd64. Click OK.
h. Select the Red Hat VirtIO SCSI controller option displayed and click Next.
i. Select the Drive 0 Partition and click New to create the partitions. Click Next to continue OS installation.
j. When prompted, enter a new password for the Administrator user.
6. Update drivers for the virtio network interface.

a. Open Device Manager using Start>Control Panel>Hardware>Device Manager. Click Device Manager.
b. Under Other Devices, locate Ethernet controller with a yellow warning icon. Right-click and select Update driver software...
c. Click Browse my computer for driver software. Click Browse.
d. Navigate to the VirtIO CD drive. Click + to expand NetKVM. Expand 2k12R2 and select amd64. Click OK and click Next.
e. When prompted, click Install to install the Red Hat VirtIO Ethernet adapter driver.
7. Update drivers for the VirtIO memory balloon device.
a. Under Other Devices, locate PCI Device with a yellow warning icon. Right-click and select Update driver software...
b. Click Browse my computer for driver software. Click Browse.
c. Navigate to the VirtIO CD drive. Click + to expand Balloon. Expand 2k12R2 and select amd64. Click OK and click Next.
d. When prompted, click Install to install the VirtIO Balloon Driver.
8. Close Device Manager and the Control Panel.
9. Update Windows settings.
a. Enable Remote Desktop from the Server Manager under Local Server. Temporarily disable the Windows Firewall. Start Internet Explorer 11.0 and, from the toolbar, select Tools>Internet Options.
b. In Internet Options, under the Security tab, disable Protected Mode for Local intranet and Trusted sites.
c. Add the required sites to the Local intranet site list.
d. Add localhost to the Compatibility View settings in Internet Explorer.
10. Install .NET Framework 3.5 features by launching the Server Manager. From the Manage menu, select Add Roles and Features.
a. Click Next at both the Before you Begin page and the Installation Type page.
b. Make sure Select a server from the server pool and the local server are selected. Click Next.
c. Click Next at the Server Roles page. Only check the box next to .NET Framework 3.5 Features. Click Next.
d. Click the Specify an alternate source path link at the bottom of the Confirmation page. Enter D:\Sources\SxS in the Path and click OK. Click Install.
11. Install the NSC DVD 27 or later.

a. Open the MASTER folder on the NSC DVD image and run Setup.exe. The License Agreement dialog box appears. After accepting the agreement, click Next. The Welcome dialog displays.
b. Select the following products: comForte MR-Win6530, OSM Low Level Link, OSM Console Tools, PuTTY, and OpenSSH. Click Next.
c. After the products in the previous step finish installing, launch MR-Win6530 and then exit (this prepares the vnsc for automatic CLCI and CNSL launch from the OSM System Startup Tool).
12. Shut down and delete the Windows VM.
13. Upload the boot drive as an image into Glance.
14. Create vnscs from the boot drive image.
NOTE: For a production system, HPE recommends selecting an NSC boot volume size of 250 GB to ensure adequate space for future CLIM software update packages, vnsc tool updates, and possibly large volumes of problem data collection in the event of issues with the future operation of your vns system.
a. In OpenStack, create a vnsc boot volume in Cinder from this image, one boot volume for each vnsc instance to be booted (a CLI sketch follows this procedure).
b. Using Horizon, name the boot volumes for the vnsc name chosen (for example, system2 vnsc1 boot).
15. Launch a vnsc instance (for example, system2 vnsc1) from its boot volume and ensure that you associate the vnsc instance with your previously created maintenance network and an external Operations network (for use with Remote Desktop).
16. After launching the vnsc instance, connect to the Horizon console to perform the next step before attempting to use Remote Desktop.

17. Using the Horizon console window, rename the Windows Server in the Windows GUI to your preference.
18. Check the IP addresses in Windows for the maintenance LAN and operations LAN. Since there is no requirement for a DHCP server on the vns Maintenance LAN, the Windows operating system is likely to choose a default automatic private IP address. Change this IP address (and subnet mask) to a static address on your vns Maintenance LAN, following the same addressing conventions used for a physical NonStop System Console.
19. Set up firewall rules and security software.
20. Resize the local drive from the initial size selected during step 3 (when you created the empty volume to hold the Microsoft ISO) to the production size selected during step 14 using these steps.
a. From the desktop of your Windows Server 2012 Cloud Server, open the Server Manager and select Tools > Computer Management.
b. In the left pane, under the Storage folder, select Disk Management. The Disk Management left pane displays the current formatted hard drive for your server, generally (C:), and the right pane displays the amount of unallocated space.
c. Right-click the C:\ drive. From the drop-down menu, select Disk Management. The left pane of Disk Management displays the amount of unallocated space.
d. Right-click the C:\ drive. From the drop-down menu, select Extend Volume. The Extend Volume Wizard dialog displays. Click Next.
e. The default selection is to add all available space to your C:\ drive (Disk 0). Click Next to add all available space. Once you see the C:\ drive expand to the maximum available space, click Finish.
f. The additional disk drive volume displays in Computer Management as available to use.
g. (Optional) Verify that the resizing of the drive worked correctly by loading Computer Management from the Server Manager and checking the disk size for the C:\ drive in Disk Management.
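The commands below are a hedged command-line sketch of steps 14 and 15 of this procedure (creating a production-size boot volume from the uploaded vnsc image and launching a vnsc instance from it). The image ID, flavor, and network IDs are placeholders, and your OpenStack deployment may require different options.
openstack volume create --image <vnsc-boot-image-id> --size 250 system2-vnsc1-boot
openstack server create --flavor <vnsc-flavor> --volume system2-vnsc1-boot --nic net-id=<maintenance-network-id> --nic net-id=<operations-network-id> system2-vnsc1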

vns OpenStack CLI commands
clim network add (page 114): Adds a network port to an existing CLIM.
clim network remove (page 115): Removes a network port from a CLIM.
clim remove (page 116): Removes a CLIM from the vns system.
clim reprovision (page 116): Reprovisions a CLIM VM. If requested, also reprovisions the CLIM boot disk.
cpu remove (page 117): Removes CPUs from the vns system.
cpu reprovision (page 117): Reprovisions one or all vns CPUs.
flavor list (page 118): Lists the available flavors for vns systems.
nsk-volume remove (page 118): Removes an NSK volume from the specified system. If the volume is mirrored, both halves of the mirrored pair will be removed.
nsk-volume reprovision (page 119): Reprovisions one half of a mirrored pair on the specified vns system.
system config save (page 120): Saves a configuration file to be used as an input file for the OSM System Configuration Tool.
system create (page 120): Creates a new vns system, using a passed-in YAML file as input.
system delete (page 124): Deletes a vns system.
system expand (page 125): Adds virtual resources for the specified vns system.
system list (page 128): Lists all vns systems in the current project.
system reprovision (page 128): Modifies the Expand Node Number or the VLAN ID of the system.
system show (page 129): Shows details about the specified vns system.
clim network add
This command adds a network port to an existing CLIM. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter.
vnonstop clim network add [-h]

115 Parameter Required or Optional Values Description Required System serial number System serial number of the vns system. Must exist in the current project. --clim-name CLIM name Name of the CLIM where the new network port should be connected. --network-id Network ID UUID of the network where the new port should be created. -h, --help Optional None Shows usage and help for the command. --subnet-id Subnet ID ID of the subnet where the new port should be created. NOTE: Required if ipaddress is specified. If not specified, one of the subnets for the specified network will be used. --ip-address IP address IP address of the new port. If not specified, one of the unallocated IP addresses valid for the specified or selected subnet will be used. --port-type Port type Example output of clim network add command Added the network port from CLIM (NCLIM001) successfully clim network remove This command removes a network port from a CLIM. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop clim network remove [-h] Parameter systemserialnumber systemserialnumber Required or Optional Values Description Required System serial number System serial number of the vns system. Must exist in the current project. --clim-name CLIM name Name of the CLIM where the new network port should be connected. -h, --help Optional None Shows usage and help for the command. --port-id Port ID ID of the port to be removed from the CLIM. Example output of clim remove command clim network remove 115

116 Removed the network port from CLIM (NCLIM001) successfully clim remove This command removes a CLIM from a vns system. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop clim remove [-h] Parameter Required or Optional Values Description Required System serial number System serial number of the vns system. Must exist in the current project. --clim-name CLIM name Name of the CLIM where the new network port should be connected. -h, --help Optional None Shows usage and help for the command Example output of clim remove command Successfully removed CLIM NCLIM003 from system (444445). clim reprovision This command reprovisions a CLIM VM and also reprovisions the CLIM boot disk (if requested). All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop clim reprovision [-h] Parameter systemserialnumber systemserialnumber Required or Optional Values Description Required System serial number System serial number of the vns system. Must exist in the current project. --clim-name CLIM name Name of the CLIM where the new network port should be connected. -h, --help Optional None Shows usage and help for the command. --image-id Image ID If specified, will reprovision the CLIM boot disk using the specified image. If not specified, the existing CLIM boot disk will be reused. --clim-vol-type Cinder type Cinder volume type to be used when reprovisioning the CLIM boot disk. Only valid if --image-id is also specified. If not specified, the volume type of the previous CLIM boot disk will be used. Example output of CLIM reprovision command 116 clim remove

117 CLIM NCLIM001 is being reprovisioned cpu remove This command removes CPUs from the vns system. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop cpu remove [-h] Parameter Required or Optional Values Description Required System serial number System serial number of the vns system. Must exist in the current project. -h, --help Optional None Shows usage and help for the command. --number-of-cpus Even numbers Number of CPUs to remove from the system. Default: 2 Example output of cpu remove command Successfully removed 2 cpus from system (444445) cpu reprovision This command reprovisions one or all vns CPUs. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop cpu reprovision [-h] Parameter systemserialnumber systemserialnumber Required or Optional Values Description Required System serial number System serial number of the vns system. Must exist in the current project. --clim-name CLIM name Name of the CLIM where the new network port should be connected. --network-id Network ID UUID of the network where the new port should be created. -h, --help Optional None Shows usage and help for the command. --new-image-id Image ID UUID of the image to be used to boot the CPU. If not specified, the existing image is used for each CPU. --new-cpu-flavor Flavor name Name of the new flavor to use for the CPU. If not specified, the existing flavor is used for each CPU. --single-cpu 0-15 CPU number to perform the reprovisioning on. If not specified, all CPUs are reprovisioned. cpu remove 117

118 Example output of cpu reprovision command All CPUs are being reprovisioned with cpu flavor: vn.nskcpu_2c16g flavor list This command lists the specific flavors available for vns systems. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop flavor list [-h] Parameter Required or Optional Values Description -h, --help Optional N/A Shows usage and help for the command. -f csv json table value yaml How to format the output. Default: table -c, --column column name Which columns to display. Repeat multiple times to display multiple columns. --max-width integer Maximum display width. --noindent N/A Only applies when using -f json. If specified, will not indent the JSON output. --quote all minimal none nonnumeric Only applies when using -f csv. If specified, will quote the selected items. Default: nonnumeric. Example output of flavor list command ID Name RAM VCPUs ac9a0-e6b0-408e-bbeb-5dd0d37b921e vn.nskcpu d4d d-43c9-bcca ced0d5 vn.nclim_8c d27-265e-40ca-8de9-f7716a30e10b vn.nskcpu_2c16g a697c f6-8ab2-8d2982d1602f vn.sclim_8c nsk-volume remove This command removes an NSK volume from the specified system. If the volume is mirrored, both halves of the mirrored pair will be removed. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop nsk-volume remove [-h] 118 flavor list

Parameter Required or Optional Values Description system-serial-number Required System serial number System serial number of the vns system. Must exist in the current project. --volume-name Volume name Name of the volume to remove. --clim-name CLIM name Name of the CLIM with the volume to remove. -h, --help Optional None Shows usage and help for the command. Example output of nsk-volume remove command Successfully removed DATA10 from SCLIM001 on system (444445) nsk-volume reprovision This command reprovisions one half of a mirrored pair of an NSK volume on the specified system; only one half of the mirrored pair is reprovisioned at a time. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop nsk-volume reprovision [-h] Parameter Required or Optional Values Description system-serial-number Required System serial number System serial number of the vns system. Must exist in the current project. --volume-name Volume name Name of the volume to be reprovisioned. --clim-name CLIM name Name of the CLIM where this volume is attached. Since only one half of the mirrored pair is reprovisioned at a time, this identifies which half of the mirrored pair to reprovision. -h, --help Optional None Shows usage and help for the command. --image-id Image ID UUID of the image to be copied on to this volume when it is reprovisioned. If not specified, an empty volume will be reprovisioned. --clim-vol-type Volume type Cinder volume type to be used when reprovisioning the volume. If not specified, the same volume type that was used for the original volume will be used. Example output of nsk-volume reprovision command

120 NSK-volume DSMSCM attached to SCLIM000 is being reprovisioned system config save This command saves a configuration file to be used as an input file for the OSM System Configuration Tool. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop system config save [-h] Parameter systemserialnumber Required or Optional Values Description Required System serial number System serial number of the vns system. Must exist in the current project. --file-name File name Output file (full or relative pathing allowed) for the command. Current user must have write permissions to the location. --meu-password MEU password Password for SCLIM000 and SCLIM clims-password CLIM password Password for all CLIMs. -h, --help Optional None Shows usage and help for the command. Example output of system config save command Successfully created the System Config Tool input file for system (555555): vosm1.cfg system create This command creates a new vns system by using a YAML file as input. NOTE: An example YAML file can be found on the control nodes at /usr/share/vnonstop/systemcreate.yaml. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop system create [-h] Parameter Required or Optional Values Description configuration file Required File name YAML file describing the system to be created -h, --help Optional None Shows usage and help for the command Example output of system create command System is being created. 120 system config save
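Before the field-by-field tables, the following is a minimal sketch of what a system create YAML file might look like, assembled from the fields documented in the tables that follow. It is illustrative only: names, UUIDs, addresses, and sizes are placeholders, the exact nesting is an assumption, and the sample file shipped on the control nodes remains the authoritative template.
system:
  name: VNS1
  serial: "444445"
  node_number: 100
  system_class: Entry
  mlan_network: <maintenance-network-uuid>
  mlan_subnet: <maintenance-subnet-uuid>
  ztcp0: <ip-on-maintenance-lan>
  ztcp1: <ip-on-maintenance-lan>
  fault_zone: 0
  primary_az: nova
  fabric:
    port_type: direct
    xfabric_net: <x-fabric-network-uuid>
    yfabric_net: <y-fabric-network-uuid>
  nsk_cpus:
    number: 2
    flavor: vn.nskcpu
    image: <hss-image-uuid>
  clim_image: <clim-image-uuid>
  clims:
    - name: SCLIM000
      type: Storage
      flavor: vn.sclim_8c
      eth0: <maintenance-ip>
    - name: SCLIM001
      type: Storage
      flavor: vn.sclim_8c
      eth0: <maintenance-ip>
    - name: NCLIM000
      type: IP
      flavor: vn.nclim_8c
      eth0: <maintenance-ip>
    - name: NCLIM001
      type: IP
      flavor: vn.nclim_8c
      eth0: <maintenance-ip>
  system_disk:
    size: 100
    image: <system-image-uuid>
    primary_clim: SCLIM000
    mirror_clim: SCLIM001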

121 YAML file format for system create Field Type Description system object Top level YAML object YAML system object (system create) Field Type Description name string System name, no leading \ character needed. Must be a valid NonStop system name. serial string Quoted system serial number. Should be 5 or 6 digits long, in single or double quotes. Must match the core license file that will be installed. node_number integer Expand node number. Valid values: system_class string The class of the system. Valid values: High, Entry --vlan_id integer (Optional) Valid values are greater than 0. Default: 2 fabric object Specifies the system configuration for the X and Y fabrics. The fabric field is required for vns systems, except when a fabric uses ConnectX-3 NICs. mlan_network UUID UUID of the network to be used for the maintenance LAN. mlan_subnet UUID UUID of the subnet to be used for the maintenance LAN. ztcp0 IPv4 address IP address on the maintenance LAN to be used for ZTCP0. ztcp1 IPv4 address IP address on the maintenance LAN to be used for ZTCP1. fault_zone integer Desired fault zone isolation option. Valid values: 0 - None 1 - CPUs Only 2 - NonStop Standard primary_az string For fault zone isolation options of 0 or 1, represents the availability zone that all the virtual machines and volumes will be provisioned in. For fault zone isolation option 2, represents where half of the virtual machines and the primary volumes will be provisioned. Table Continued vns OpenStack CLI commands 121

122 Field Type Description mirror_az string For fault zone isolation options of 0 or 1, this is not valid. nsk_cpus object nsk_cpus object. For fault zone isolation option 2, represents where the second half of the virtual machines and the mirror volumes will be provisioned. clim_image UUID UUID of the image to be installed on the boot volume of each CLIM. clim_vol_type string (Optional) Cinder type to be used when creating the boot volumes for the CLIM virtual machines. clims array Array of CLIM objects. At a minimum, must contain entries for 2 storage CLIMs and 2 IP CLIMs. system_disk object system_disk object. nsk_vols array (Optional) Array of nsk_vol objects. If no nsk_vol objects are specified, only the $SYSTEM disk will be created. YAML fabric object (system create) Field Type Description port_type string Type of fabric port to use when provisioning the X and Y fabric ports in Neutron. Valid values: direct creates an SR-IOV port. direct-physical creates a passthrough port. xfabric_net UUID Neutron network that provisions the X-fabric port. The xfabric_net field cannot be the same as the yfabric_net field. yfabric_net UUID Neutron network that provisions the Y-fabric port. The yfabric_net field cannot be the same as the xfabric_net field. YAML nsk_cpus object (system create) 122 vns OpenStack CLI commands

123 Field Type Description number integer Number of CPUs for the system. Values: Even numbers 2-16 flavor string Flavor name to be used when creating the CPU virtual machines. image UUID UUID of the image to be used to boot the CPU virtual machines. YAML CLIM object (system create) Field Type Description name string CLIM name. Must be a valid NSK CLIM name. type string Type of CLIM. Values: Storage, IP, Telco flavor string Flavor name to be used when creating this CLIM virtual machine. eth0 IPv4 address IP address to be used on the maintenance LAN for this CLIM. networks array (Optional) Array of network objects specifying the additional networks to attach to this CLIM. Storage CLIMs can have up to 2 additional networks. IP and Telco CLIMs can have up to 5 additional networks. YAML network object (system create) Field Type Description id UUID UUID of the network to be attached. subnet UUID attached. If no subnet is specified, the port creation will not request a specific subnet. Required if ipaddress is specified. ipaddress IP address (Optional) IP address to be used for this network port. Must be a valid and available IP address on the specified subnet. Requires that the subnet field is also specified. YAML system disk object (system create) vns OpenStack CLI commands 123

124 Field Type Description size integer This is the minimum size in GB for the $SYSTEM disk object. The minimum size required is 100 GB. image UUID Image to be copied onto the volume when it is created. primary_clim string Name of the primary CLIM where the system disk should be attached. mirror_clim string Name of the mirror CLIM where the system disk should be attached. type string (Optional) Cinder volume type to be used when creating the $SYSTEM disk. YAML nsk_vol object (system create) Field Type Description name string NSK name for the volume. Used in naming the volume for easy identification in OpenStack. size integer Size for the volume, in GB. primary_clim string Name of the CLIM where the primary disk should be attached. mirror_clim string (Optional) Name of the CLIM where the mirror disk should be attached. If not specified, an unmirrored disk will be created. type string (Optional) Cinder volume type to be used when creating this disk. system delete This command deletes all of the virtual resources for a vns system. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop system delete [-h] Parameter systemserialnumber Required or Optional Values Description Required System serial number System serial number of the vns system to be deleted. Must exist in the current project. -h, --help Optional None Shows usage and help for the command. --force Optional N/A If specified, forces the deletion of the vns system, even if the system is not in the OK or ERROR states. 124 system delete

125 Example output of system delete command System is being deleted. system expand This command adds CPUs, CLIMs, and/or NSK volumes to an existing vns system by using a YAML file as input. NOTE: All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter. vnonstop system expand [-h] Parameter An example YAML file can be found on the control nodes at /usr/share/vnonstop/systemexpand.yaml. systemserialnumber Required or Optional Values Description Required System serial number System serial number of the vns system to be expanded. Must exist in the current project. --input-file Required YAML input file File that contains the new CPU, CLIM, and/or NSK volume information to be added to the vns system. -h, --help Optional None Shows usage and help for the command. Example output of system expand command System is being expanded. YAML file format for system expand Field Type Description system object Top level YAML object YAML system object (system expand) Field Type Description nsk_cpus object nsk_cpus object clim_image UUID (Optional) UUID of the image to be installed on the boot volume of each CLIM. Is required is required if any entries are present in the clims array. Table Continued system expand 125

126 Field Type Description clim_vol_type string (Optional) Cinder type to be used when creating the boot volumes for the CLIM virtual machines. clims array Array of clim objects to be added to the vns system. nsk_vols array (Optional) Array of nsk_vol objects to be added to the system. Is required if any new storage CLIMs are added, as each storage CLIM requires at least one attached NSK volume. Can contain NSK volumes that are attached to existing CLIMs or newly added CLIMs, as long as the total number of attached volumes to any one CLIM is not larger than 25. YAML nsk_cpus object (system expand) Field Type Description number integer Number of CPUs to add to the vns system. Values: Even numbers starting from 2. Total configuration cannot exceed 16 CPUs. flavor string (Optional) Flavor name to be used when creating the CPU virtual machines. If not specified, uses the same flavor name as CPU 0. image UUID (Optional) UUID of the image used to boot the CPU virtual machines. If not specified, uses the same image CPU 0 YAML CLIM object (system expand) Field Type Description name string CLIM name. Must be a valid NSK CLIM name. type string Type of CLIM. Values: Storage, IP, Telco Table Continued 126 vns OpenStack CLI commands

127 Field Type Description flavor string Flavor name to be used when creating this CLIM virtual machine. eth0 IPv4 address IP address to be used on the maintenance LAN for this CLIM. networks array (Optional) Array of network objects specifying the additional networks to attach to this CLIM. Storage CLIMs can have up to 2 additional networks. IP and Telco CLIMs can have up to 5 additional networks. YAML network object (system expand) Field Type Description id UUID UUID of the network to be attached. subnet UUID (Optional) UUID of the subnet to be attached. If no subnet is specified, the port creation will not request a specific subnet. Required if ipaddress is specified. ipaddress IP address (Optional) IP address to be used for this network port. Must be a valid and available IP address on the specified subnet. Requires that the subnet field is also specified. YAML nsk_vol object (system expand) Field Type Description name string NSK name for the volume. Used in naming the volume for easy identification in OpenStack. size integer Size for the volume, in GB. image UUID Image to be copied onto the volume when it is created. primary_clim string Name of the CLIM where the primary disk should be attached. Table Continued vns OpenStack CLI commands 127

system list

This command lists all of the vns systems in the current project. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter.

vnonstop system list [-h]

-h, --help (Optional): Shows usage and help for the command.
-f (csv | json | table | value | yaml): How to format the output. Default: table.
-c, --column (column name): Which columns to display. Repeat multiple times to display multiple columns.
--max-width (integer): Maximum display width.
--noindent: Only applies when using -f json. If specified, will not indent the JSON output.
--quote (all | minimal | none | nonnumeric): Only applies when using -f csv. If specified, will quote the selected items. Default: nonnumeric.

Example output of system list command

system reprovision

This command reprovisions a CLIM VM and also reprovisions the CLIM boot disk (if requested). All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter.

vnonstop system reprovision [-h]

system-serial-number (Required; system serial number): System serial number of the vns system to be reprovisioned. Must exist in the current project.
-h, --help (Optional): Shows usage and help for the command.
--new-expand-node-number: New expand node number.
--new-vlan-id (valid values: >0): New VLAN ID for the system to use.

Example output of system reprovision command

System (444445) is being reprovisioned with expand node number: 254 and VLAN ID: 10

system show

This command shows the details for a specified vns system. All command parameters are described in the following table. To view the syntax and usage for this command, use the help parameter.

vnonstop system show [-h] <system-serial-number>

system-serial-number (Required; system serial number): System serial number of the vns system. Must exist in the current project.
-h, --help (Optional): Shows usage and help for the command.

Example output of system show command

System Name: VOSM1
SSN:
Expand Node Number: 235
CPUs:
  VOSM1-CPU00: flavor = vn.nskcpu, image = 7a3776e7-2e13-487a-843b-b3d3b4282e38
  VOSM1-CPU01: flavor = vn.nskcpu, image = 7a3776e7-2e13-487a-843b-b3d3b4282e38
Storage CLIMS:
  VOSM1-SCLIM001: flavor = vn.sclim_8c
  VOSM1-SCLIM000: flavor = vn.sclim_8c
IP CLIMS:
  VOSM1-NCLIM000: flavor = vn.nclim_8c
  VOSM1-NCLIM001: flavor = vn.nclim_8c
NSK Volumes:
  $SYSTEM: size = 100, clims: VOSM1-SCLIM001, VOSM1-SCLIM000
  $DSMSCM: size = 50, clims: VOSM1-SCLIM001, VOSM1-SCLIM000
  $AUDIT: size = 50, clims: VOSM1-SCLIM001
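For reference, the two commands described above could be invoked from a shell roughly as follows. The serial number is the one used in the example output; the long-option spellings are assumptions based on the parameter tables above and should be confirmed with the command's --help output.

vnonstop system reprovision 444445 --new-expand-node-number 254 --new-vlan-id 10
vnonstop system show 444445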

Horizon interface for vns

Project Dashboard for vns systems on page 130: Provides the system panel display.
System Details on page 131: Displays detailed information about the selected system and lets you view Overview details, CPU details, CLIM details, and NSK volume details.
Admin Dashboard for vns systems on page 135: Provides the Flavors panel for managing vns flavors.
Launch System workflow on page 136: Creating a new vns system.
Expand System workflow on page 136: Adding a resource (CPU, CLIM, Volume) to an existing vns system.
Reprovision CLIM workflow on page 136: Reprovisioning a CLIM in an existing vns system.
Reprovision CPU workflow on page 136: Reprovisioning a CPU in an existing vns system.
Reprovision Volume workflow on page 136: Reprovisioning a volume in an existing vns system.
Remove CLIM workflow on page 136: Removing a CLIM in an existing vns system.
Remove CPU workflow on page 137: Removing a CPU in an existing vns system.
Remove Volume workflow on page 137: Removing a volume in an existing vns system.
Delete System action on page 137: Deleting a system including all resources for that system.
Create Flavor workflow on page 137: Creating a flavor for a CPU or CLIM.

TIP: The workflows provide detailed online help. Click ? to launch the online help within the workflows.

Project Dashboard for vns systems

The system panel displays a list of the vns systems in the current project. Selecting a system listed under System Name (for example, VOSM1) displays the detailed attributes for that system. For more information, see Horizon vns System List attributes.

Table 2: Horizon System List attributes

System Name: System name, hyperlinked to the system details panel.
System Serial Number: System serial number, as entered when creating the system.
Expand Node: Expand Node Number.
System Class: High or Entry.
Logical Processors: Number of logical processors for the system.
Storage CLIMs: Number of storage CLIMs for the system.
IP CLIMs: Number of IP CLIMs for the system.
Telco CLIMs: Number of Telco CLIMs for the system.
System Status: State of the system. Values: OK, DOWN, ERROR.
Task Status: State of the current or last run task. Values: OK, BUILDING, DELETING, EXPANDING, REPROVISIONING, FAILED. Task Status is supported on L17.08 and later.
Actions: List of actions: Expand System, Reprovision CPU, Reprovision CLIM, Reprovision Volume, Remove CPUs, Remove CLIM, Remove Volumes, Delete System.

NOTE: The Expand action is supported on L17.08 and later.

System Details

The Horizon system details panel displays detailed information about the selected vns system using these tabs:

Overview Tab on page 132
CPUs Tab on page 133
CLIMs Tab on page 134
Volumes Tab on page 135

Overview Tab

Displayed attributes

Table 3: Overview Tab attributes (Horizon System Details)

System Name: System name. Hyperlinked to the system details panel.
SSN: System serial number, as entered when creating the system.
Expand Node Number: Expand Node Number.
System Status: State of the system. Values: OK, DOWN, ERROR.
Task Status: State of the current or last run task. Values: OK, BUILDING, DELETING, EXPANDING, REPROVISIONING, FAILED.
Task Status Message: If there is a message for the current task state, it will be displayed. If not, this attribute will not be displayed.
System Class: High or Entry.
CPUs: Number of logical processors for the system.
Storage CLIMs: Number of storage CLIMs for the system.

IP CLIMs: Number of IP CLIMs for the system.
Telco CLIMs: Number of Telco CLIMs for the system. Only displayed if Telco CLIMs were created for the system.

CPUs Tab

Displayed attributes

Table 4: CPUs Tab attributes (Horizon System Details)

CPU Name: The name of the CPU virtual machine will be the heading for the table with the details for that particular CPU.
Flavor: Flavor for the CPU.
Image: Glance ID of the HSS image used to boot the CPU.
Status: Status of the CPU virtual machine.
Created: Creation date and time of the CPU virtual machine.
Updated: Last time the CPU virtual machine was updated.

CLIMs Tab

Displayed attributes

Each CLIM type (Storage, IP, and Telco) has a section containing detail tables for each CLIM.

Table 5: CLIMs Tab attributes (Horizon System Details)

CLIM Name: The name of the CLIM virtual machine will be the heading for the table with the details for that particular CLIM.
Status: Status of the CLIM virtual machine.
IP Address: List of IP addresses used for the ports on this CLIM.
#Attached Volumes: Number of attached volumes. Only valid for Storage CLIMs.
Attached Volume Names: Comma-separated list of the volumes attached to this CLIM. Only valid for Storage CLIMs.
Created: Creation date and time of the CLIM virtual machine.
Updated: Last time the CLIM virtual machine was updated.

Volumes Tab

Displayed attributes

Table 6: Volumes Tab attributes (Horizon System Details)

Volume Name: The name of the NSK volume will be the heading for the table with the details for that particular volume.
Size: Disk size, in gigabytes.
CLIMs: CLIM or CLIMs that the volume is attached to. Unmirrored volumes will display a single CLIM, while mirrored volumes will display two CLIMs.

Admin Dashboard for vns systems

Flavors Panel

The flavors panel displays a list of flavors that have been created for use by vns systems and allows an administrator to manage those flavors. For more information, see Flavor management for Virtualized NonStop virtual machines on page 32.

Displayed Attributes

Table 7: Flavor List attributes (Horizon Admin Details)

Flavor Name: Displays the name of the flavor.
Type: Type of the flavor. Values: CPU, Storage CLIM, IP/Telco CLIM.

RAM: Amount of memory associated with the flavor.
VCPUs: Number of virtual CPUs associated with the flavor. CPU flavors use cores, and Storage CLIM and IP/Telco CLIM flavors use cores and hyperthreads.

Launch System workflow

The launch system workflow is run by clicking Launch System above the system list table on the System Panel. A new dialog launches to guide you through creating a vns system.

Each setting on the left side of the Create Virtualized NonStop System dialog (for example, Details) provides detailed online help. Click ? to launch the online help.

Expand System workflow

The expand system workflow is run by clicking Expand System in the system list panel for vns systems. A new dialog launches to guide you through adding resources to an existing vns system.

Each setting on the left side of the Expand Virtualized NonStop System dialog (for example, NSK CPUs) provides detailed online help. Click ? to launch the online help.

Reprovision CLIM workflow

The reprovision CLIM workflow is run by clicking Reprovision CLIM in the dropdown action list of the system list panel for vns systems. A new dialog launches to guide you through reprovisioning a CLIM in an existing vns system.

Select Details on the left side of the Redeploy Virtualized NonStop CLIM dialog. Click ? to launch the online help.

Reprovision CPU workflow

The reprovision CPU workflow is run by clicking Reprovision CPU in the dropdown action list of the system list panel for vns systems. A new dialog launches to guide you through reprovisioning a CPU in an existing vns system.

Select Details on the left side of the Reprovision Virtualized NonStop CPUs dialog. Click ? to launch the online help.

Reprovision Volume workflow

The reprovision volume workflow is run by clicking Reprovision Volume in the dropdown action list of the system list panel for vns systems. A new dialog launches to guide you through reprovisioning a volume in an existing vns system.

Select Details on the left side of the Reprovision Virtualized NonStop Volumes dialog. Click ? to launch the online help.

Remove CLIM workflow

The remove CLIM workflow is run by clicking Remove CLIM in the dropdown action list of the system list panel for vns systems. A new dialog launches to guide you through removing a CLIM in an existing vns system.

Select Details on the left side of the Delete Virtualized NonStop Components dialog. Click ? to launch the online help.

Remove CPU workflow

The remove CPU workflow is run by clicking Remove CPU in the dropdown action list of the system list panel for vns systems. A new dialog launches to guide you through removing a CPU in an existing vns system.

Select Details on the left side of the Delete Virtualized NonStop Components dialog. Click ? to launch the online help.

Remove Volume workflow

The remove volume workflow is run by clicking Remove Volume in the dropdown action list of the system list panel for vns systems. A new dialog launches to guide you through removing a volume in an existing vns system.

Select Details on the left side of the Delete Virtualized NonStop Components dialog. Click ? to launch the online help.

Delete System action

The delete system action is run by clicking Delete System in the dropdown action list of the system list panel for vns systems. A confirmation dialog is displayed requiring the administrator to confirm deletion of all virtual resources that comprise the system.

Create Flavor workflow

The create flavor workflow is run by clicking Create Flavor in the Flavors panel of the Admin dashboard. A new dialog launches to guide the administrator through creating a flavor for a CPU or CLIM.

Select Flavor Type and Flavor Details on the left side of the Create Virtualized NonStop Flavor dialog. Click ? to launch the online help for these selections.

Using ELK for NonStop and Virtualized NonStop event logs

This appendix describes how to set up an initial development ELK environment running on a Linux VM or a single Linux server running Ubuntu 16.04 LTS. All procedures describe setting up ELK processes on the single Linux server. If you require a more advanced ELK cloud deployment, adjust the procedures in this appendix accordingly.

For instructions for ELK setup in a Red Hat Linux environment such as RHOSP 10.0, see the recommended documentation in Mandatory prerequisites for setting up ELK.

NonStop events and ELK

Sending NonStop events to an Elasticsearch, Logstash, and Kibana (ELK) environment for monitoring and analysis is now supported on vns, HPE Integrity NonStop X systems, and HPE Integrity NonStop i systems.

ELK is a collection of three open source tools: Elasticsearch, Logstash, and Kibana. This illustration summarizes some of the features of these tools.

To send NonStop events to Logstash in the ELK environment, you use XYGATE Merged Audit (XMA) to set up XMA filters. For more information, see:

Requirements and tested environment
Mandatory prerequisites for setting up ELK

Considerations for vns, ELK, and OpenStack environment

The vns systems run in an OpenStack or VMware environment. You can also monitor and diagnose OpenStack control nodes and compute nodes by forwarding Linux server events and OpenStack events to the same or a different ELK environment as the one monitoring a vns system or systems. The open source Filebeat product efficiently forwards events from Linux and OpenStack logs to the ELK server environment.

If you use a single environment for analyzing all events, you may need to set up a bridge and firewall with an IP port open to deliver events across a network boundary.

Requirements and tested environment for ELK setup

Software and setup requirements

This table uses system as a collective term for NonStop i, NonStop X, or vns systems.

XYGATE Merged Audit (XMA): A recent version of XMA is required on the system.
Pathway, SQL, TMF-audited volume: These are required for the XMA database on the system.
Java 8.0: Required for Elasticsearch and Logstash.
An open TCP port for listening for forwarded events from XMA on the system: Required for Logstash. Port 5514 is recommended, but any unused port number greater than 1024 can be used.
Privileged access (sudo user or root login) on Linux server: Required to perform some procedures.
Linux server memory and free disk space: A single Linux server setup requires 16 GB memory (more is recommended) and 200 GB of free disk space. A full production environment requires more resources.
Two ELK environments for vns: This is only required when the OpenStack control plane is isolated from the operations plane and does not have ports open to forward events.

Tested environment for ELK setup

The ELK setup in this appendix describes an initial development ELK environment, running on a single Linux server or Linux virtual machine. This is the tested environment for that ELK setup.

XYGATE Merged Audit (XMA): XMA version 2.44 running on the L17.08 RVU
ELK: ELK version 5.5
Linux server: Single Linux server running Ubuntu 16.04 LTS
Java: OpenJDK Java version 1.8.0_131
Web browser: Chrome and Firefox provided best results for Kibana
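If a firewall sits between the NonStop system and the Linux server, the Logstash listener port must be reachable. The following is only a sketch assuming ufw as the firewall front end on the Ubuntu server; this tool is not prescribed by this guide, so substitute firewalld or your site's tooling and port numbers as appropriate.

sudo ufw allow 5514/tcp    # listener port used by Logstash for forwarded XMA events
sudo ufw status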

Mandatory prerequisites for setting up ELK

Verify:

You have reviewed the Requirements and tested environment for ELK setup.

You have planned the ELK environment. These instructions describe ELK setup for a single server environment and do not provide detailed planning for larger environments.

You have reviewed the Elasticsearch 5.5 installation documentation at: guide/en/elasticsearch/reference/current/install-elasticsearch.html

TIP: Keep the Elasticsearch link open or bookmark it. You will need the Elasticsearch documentation for ELK package setup, installation, and configuration. Do the same for other documentation recommended in this checklist.

A recent version of XYGATE Merged Audit (XMA) is installed. If not, see the XYGATE Merged Audit Reference Manual for details about how to run the iwizard installation macro.

For RHOSP, you have the Red Hat OpenStack Platform 10 Advanced Overcloud Customization manual from the Red Hat website. You need the instructions for "Centralized Logging" in Chapter 12 of that manual during ELK Installation and Configuration.

The XMA TACL segment is attached to your current TACL session and XMA is started:

run XYGATEMA.XMA INSTALL xma_pwcold

If maintenance is performed on the XMA environment on a regular basis, add the following to the TACLCSTM file to attach the TACL segment:

RUN $<vol>.<subvol>.xma INSTALL

Where vol.subvol is the location of the XMA installation on the NonStop system on which Pathway is running.

The XMA syslog sender is configured in XMA. If not, from a TACL prompt in xma_manager:

1. Select 2 at the Main Menu Selection? prompt to display the Movers Management Menu.
2. From the Movers Management Menu, select 18 Configure SysLog Send server.
3. Enter R(un) to execute the task.

You have a filter for sending EMS events added to the XYPROMA.FILTERS file. The new filter should format the information into either Common Event Format (CEF) or syslog format.

For creating a CEF filter, see the XYGATE Merged Audit Reference Manual for information on using the Log Adapter FILTERs (LAF) macro to create a CEF filter. After installing the CEF filter, you must change IPALERT_MSGDELIMITER from CR to LF for Logstash to parse the events properly.

For creating your own syslog-formatted filter, see Code Example syslog-type filter on page 147.

You have Java 8 installed on the Linux server hosting ELK. Verify by issuing the following at a bash shell prompt:

java -version

On Ubuntu, if Java 8 is not installed, you must install it by issuing:

sudo apt-get install openjdk-8-jdk

On Red Hat, if Java 8 is not installed, install it by issuing:

sudo yum install java openjdk

For more information, see the Elasticsearch 5.5 documentation mentioned above.

Elastic.co is present in the list of sources used by your package manager distribution tool. If present, proceed to the next checklist item. If not, you must set up the tool to obtain ELK components from elastic.co as follows:

Add the Elasticsearch public signing key to Ubuntu Linux (the package manager is apt, using the Debian format):

wget -qO - <elastic.co signing key URL> | sudo apt-key add -

Add the elastic.co repository to a file in the directory of sources used by the apt package manager:

echo "deb <elastic.co 5.x apt repository URL> stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

For more information on Red Hat or Ubuntu Linux installation, see the Elasticsearch 5.5 documentation mentioned above.
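The key and repository URLs are not reproduced in this extract. As an assumption based on the public Elasticsearch 5.x installation documentation (not on this guide), the two commands above are usually completed as shown below; likewise, the Red Hat Java 8 package is normally named java-1.8.0-openjdk. Confirm these against the documentation link in the prerequisites.

# Ubuntu/Debian: import the elastic.co signing key and add the 5.x apt repository (assumed URLs)
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

# Red Hat / CentOS: assumed package name for Java 8
sudo yum install java-1.8.0-openjdk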

ELK Installation and Configuration

IMPORTANT: Ensure that you have completed the Mandatory prerequisites for setting up ELK on page 140. This checklist describes ELK setup on Ubuntu. For ELK setup on Red Hat OSP, see the Elasticsearch 5.5 documentation listed in the first row of this table.

To install and configure ELK, complete the items in this checklist.

On Ubuntu, install Elasticsearch by issuing:

sudo apt-get update && sudo apt-get install elasticsearch

For ELK installation and configuration instructions, or for assistance in troubleshooting package manager errors on Ubuntu or Red Hat Enterprise Linux, see the Elasticsearch documentation referenced in Mandatory prerequisites for setting up ELK.

Install Kibana by issuing:

sudo apt-get install kibana

Install Filebeat on the ELK server by issuing:

sudo apt-get update && sudo apt-get install filebeat

NOTE: Filebeat forwards and centralizes logs and parses those logs into fields that Elasticsearch understands, making it easier to search the data.

Install Logstash by issuing:

sudo apt-get install logstash

Configure Elasticsearch

For an initial development environment, Elasticsearch does not require additional configuration beyond the default configuration. However, you may want to adjust the heap memory limits for Java. At the Linux shell prompt, issue the following command to see the configuration files:

~$ sudo ls /etc/elasticsearch
elasticsearch.yml  jvm.options  log4j2.properties  scripts

To adjust Java memory, see Adjusting the Java heap memory on page 146.

Configure Kibana:

The Kibana configuration file is located in: /etc/kibana/kibana.yml

There are two items within this file you may want to edit so that you can connect to Kibana with a web browser on a different machine than the server hosting Kibana. To do so, change the values of:

1. server.host to the IP address or DNS name for the preferred network interface of the server.
2. server.port to an IP port number that is not used by your server, but is less well-known than the default port 5601.

NOTE: You can select an unused port other than the default port 5601 if you have security concerns about a well-known port. For a more complete security solution including per-user access, you must install an additional product such as the for-fee product X-Pack from elastic.co, or select another solution.

Start the Elasticsearch and Kibana services by issuing:

~$ sudo systemctl start elasticsearch.service

Verify the services are running by issuing:

~$ sudo ps -aux | grep elasticsearch
~$ sudo ps -aux | grep kibana

To verify whether Kibana started successfully, review the log files in /var/log/kibana/

If you need to start or stop Kibana, issue:

sudo systemctl start kibana.service
sudo systemctl stop kibana.service

If Kibana does not start successfully, see Verify Elasticsearch and Kibana installation.
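A quick way to confirm that the stack is answering (a sketch only; the port shown is the Elasticsearch default used throughout this appendix) is to query the Elasticsearch REST port and check the service states:

curl http://localhost:9200                      # should return a JSON banner with cluster name and version
sudo systemctl status elasticsearch.service
sudo systemctl status kibana.service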

Configure Logstash on the ELK server:

1. Using sudo access, create this file (do not save the file yet): /etc/logstash/conf.d/logstash-xma.conf
2. Before saving the logstash-xma.conf file, edit it as follows. This file is for events from the vns or NonStop system. The filter section renames some fields and removes unused fields for EMS events.

input {
  tcp {
    port => 5516
    codec => "cef"
  }
  tcp {
    port => 5514
    type => "syslog-xma"
  }
}
filter {
  # for EMS events in CEF format, remove unused fields.
  if [devicefacility] == "EMS" {
    mutate {
      add_field => { "tags" => "EMS-CEF" }
      rename => { "devicecustomdate1" => "eventgentime" }
      rename => { "devicecustomstring2" => "eventdistributor" }
      rename => { "devicecustomstring3" => "processid" }
      rename => { "devicecustomstring5" => "terminalname" }
      rename => { "deviceeventcategory" => "eventnumber" }
      rename => { "filetype" => "subsysid" }
      rename => { "devicecustomnumber1label" => "xmaalertinfo" }
      # For EMS destinationhostname, destinationuserid, and
      # destinationusername are duplicates of source* fields.
      # The eventoutcome field is not related to the EMS event,
      # it is the database access outcome.
      # The severity field generated by XMA does not come from
      # the EMS event.
      remove_field => [ "destinationhostname", "destinationuserid",
        "destinationusername", "devicecustomstring1", "devicecustomstring4",
        "externalid", "port", "filename", "reason", "destinationaddress",
        "devicecustomstring6", "sourceaddress", "sourcednsdomain",
        "eventoutcome", "severity", "devicecustomnumber1" ]
    }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
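Before starting the service, the pipeline file can be syntax-checked. This is a sketch that assumes the standard location of the logstash binary installed by the Debian/RPM packages; it is not a step called out by this guide.

sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash-xma.conf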

NOTE: If you have a vns system installed on RHOSP 10.0 or later, skip the next checklist item and follow the instructions for "Centralized Logging" in Chapter 12 of the Red Hat OpenStack Platform 10 Advanced Overcloud Customization manual at the Red Hat website.

On Ubuntu, save the file. In the same directory as the file, create a second file for handling the Filebeat protocol with the following name. This file is for OpenStack events.

/etc/logstash/conf.d/filebeat.conf

input {
  beats {
    port => 5044
  }
}
filter {
  # For openstack events, copy the timestamps from the event message
  # field to a new field called generationtime.
  # Depending on what type of openstack log this event came from, the
  # timestamp format could be "yyyy-mm-dd hh:mm:ss" or
  # "MMM dd hh:mm:ss", or the event could start with an IP address
  # followed by the date like this:
  # [12/Jul/2017:15:44: ].
  if "openstack" in [tags] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:generationtime}" }
    }
  }
  if "_grokparsefailure" in [tags] {
    grok {
      remove_tag => ["_grokparsefailure"]
      match => { "message" => "%{IP}(\s*\-\s*\-\s*\[)%{HTTPDATE:generationtime}" }
    }
  }
  if "_grokparsefailure" in [tags] {
    grok {
      remove_tag => ["_grokparsefailure"]
      match => { "message" => "\[%{HTTPDERROR_DATE:generationtime}" }
    }
  }
}
output {
  elasticsearch { hosts => "localhost:9200" }
}
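If events do not show up as expected, a common debugging step (a sketch, not a step prescribed by this guide) is to add a temporary rubydebug output alongside the elasticsearch output and run Logstash in the foreground, so that each parsed event is printed to the terminal; the TIP that follows describes this same approach. The binary path shown is the usual Debian/RPM package location.

output {
  elasticsearch { hosts => "localhost:9200" }
  stdout { codec => rubydebug }   # temporary; remove once parsing looks correct
}

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/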

TIP: If NonStop events do not begin to appear after setting up XMA export on the NonStop server, you can add a rubydebug statement to the output section, then temporarily run Logstash from a terminal shell session to see whether parsing errors have occurred. Use of rubydebug with Logstash is discussed in web blogs and documents on the Internet.

After the Logstash configuration is working properly, remove the rubydebug statement and start Logstash as a service:

sudo systemctl start logstash.service

On your NonStop node, start the XMA CEF output to the Logstash server IP address and selected port number.

From the Management tab in Kibana:

1. Click Index Patterns.
2. Add logstash-*.
3. Perform searches on CEF in the Kibana user interface and your web browser with logstash-* selected.

Optional: On Ubuntu, install, configure, and start Filebeat on OpenStack nodes. For RHOSP 10, see the centralized logging with export to Elasticsearch instructions and associated link listed above.

1. Add the Elasticsearch signing key and install Filebeat just as you did on the ELK server.
2. Logged on as root or using sudo, edit the configuration file: /etc/filebeat/filebeat.yml.
3. Copy the filebeat.yml used on your ELK server node. See "Configure Filebeat on the ELK server node and send events to the local Elasticsearch process".
4. Start the filebeat service: systemctl start filebeat.service

On Ubuntu, verify Filebeat is running on the OpenStack node by issuing:

~$ sudo ps -aux | grep filebeat

Adjusting the Java heap memory

Elastic.co recommends using half the available memory on the Linux server, rather than the 2 gigabytes configured by the jvm.options installed by the default elasticsearch package. For initial development testing, 4 gigabytes or more should suffice. Inside jvm.options, you can edit the -Xms and -Xmx entries.

NOTE: The -Xms and -Xmx entries must have identical values. As usual on Linux, lines starting with # are comments. You will need sudo or root access to edit the configuration.

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms4g
-Xmx4g

Code Example syslog-type filter

This is an example of a filter that you could create and add if you do not want to use the Common Event Format (CEF) filter.

FILTERDEFBEGIN $SYSLOGQ-EMS-EVENTS-ELK
!= This FILTER selects all EMS records.
STATUS ACTIVE
! $SYSLOGQ-EMS-EVENTS-ELK
! Uncomment the next line if you want to perform the defined
! actions for this filter, then continue to check for another
! filter that matches the selection criteria.
! EVALUATE_MSG CONTINUE
MOVER_BEGIN
  MOVER_SELECT_BEGIN
    PRODUCT = EMS
  MOVER_SELECT_END
MOVER_END
DATA_BEGIN
  DATA_SELECT_BEGIN
    FILTERTYPE NOFILTER
  DATA_SELECT_END
DATA_END
ACTIONCOLL_BEGIN
  ACTION_BEGIN
    ACTIONTYPE SYSLOGQ
    IPALERT_ADDRESS <ELK server ip address>
    IPALERT_MSGDELIMITER LF
    ! If logging to other systems:
    ! - The IPALERT_PORT is usually 514, but can be anything
    IPALERT_PORT 514
    ALERTSTRING (AUDIT.RECORDLCT)
    ALERTSTRING (AUDIT.OPERATION)         ! EMS-EVENT
    ALERTSTRING (SESSION.CLIENTPROGRAM)   ! generating program
    ALERTSTRING (AUDIT.TERMINAL)
    ALERTSTRING (SESSION.PROCESSTHREADID) ! process descriptor
    ALERTSTRING (SESSION.PROCESSTHREADID2)
    ALERTSTRING (AUDIT.SUBJECTLOGIN)
    ALERTSTRING (AUDIT.SUBJECTSYSTEM)
    ALERTSTRING (AUDIT.SEVERITY)          ! 1=info, 3=critical
    ALERTSTRING (AUDIT.OBJECTTYPE)        ! SSID
    ALERTSTRING (AUDIT.MESSAGEID)         ! Event number
    ALERTSTRING (INSTALL.LOCATION)        ! EMS distributor
    ALERTSTRING (AUDIT.RESULT)            ! EMS event text
  ACTION_END
  ACTION_BEGIN
    ACTIONTYPE SETDATA

    AUDIT.USER_DATA syslog alert
  ACTION_END
  != Add other desired ACTIONs here
ACTIONCOLL_END
FILTERDEFEND

On the ELK server you can now modify the Logstash configuration to expect EMS events in this format and parse them into whatever fields you like.

TIP: You may need to restart XMA for modifications to take effect.

The following PATHCOM command can be useful to make sure all the XMA Movers are running, and to see if they have had any errors. After entering the command, ensure that you enter status server * at the = prompt. Or you can run status $xma at the TACL prompt.

PATHCOM $XMA
=status server *
SERVER     #RUNNING  ERROR  INFO
CLA-VOSM
EMS-VOSM
HKEEPER    1
SFG-VOSM
SLSENDER   1
XTR-VOSM
=

Verify Elasticsearch and Kibana installation

You can verify the Kibana and Elasticsearch installation by using Filebeat to send events from the syslog file on the ELK server to Elasticsearch.

Use Filebeat to test the Elasticsearch and Kibana installation

Configure Filebeat on the ELK server node and send events to the local Elasticsearch process:

Logged on as root or using sudo, edit the configuration file: /etc/filebeat/filebeat.yml.

In the Prospectors section of the configuration file, ensure the line below is uncommented (text is bolded for emphasis) so that local syslog data is collected.

NOTE: You can use different prospectors for various configurations. Filebeat uses prospectors to locate and process files. For the prospector-specific configuration for vns, see Code Example Prospector configuration for Virtualized NonStop running in an Ubuntu OpenStack cloud on page 150.
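The uncommented prospector line referred to above is not reproduced in this extract. As an assumption based on the default filebeat.yml shipped with Filebeat 5.x (not on this guide), the relevant section typically looks like the following, with the local log paths enabled:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log      # collects local syslog-style files on the ELK server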

In the ELK server's filebeat.yml configuration file, make sure the output to Elasticsearch is not commented out:

# Elasticsearch output
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

In the filebeat.yml configuration file, leave the lines for Logstash output commented out:

# Logstash output
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

Start and verify Filebeat on the ELK server node:

sudo systemctl start filebeat.service
~$ sudo ps -aux | grep filebeat

Wait five minutes and view Kibana using a web browser on your PC.

TIP: Chrome or Firefox are recommended.

1. Use the URL you created when you configured Kibana: The Kibana user interface appears.
2. Create a filebeat-* index when prompted.
   a. In Index name, enter filebeat-*
   b. In Time-field name,
   c. Click Create.
3. Perform a few searches. Specify a time period and use an * to perform wildcard searches.

TIP: If searches are not successful, you may need to create some events in the syslog.

Code Example Prospector configuration for Virtualized NonStop running in an Ubuntu OpenStack cloud

filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
  exclude_files: [".gz$"]
- input_type: log
  paths:
    - /var/log/keystone/*
  exclude_files: [".gz$"]
  fields:

151 tags: ["keystone", "identity", "openstack"] - input_type: log paths: - /var/log/nova/* exclude_files: [".gz$"] fields: tags: ["nova", "compute", "openstack"] # Consecutive lines that don't match the pattern are appended to # the previous line that does match. multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}' multiline.negate: true multiline.match: after # After the 120 seconds, Filebeat sends the multiline event even # if no new pattern is found to start a new event. multiline.timeout: input_type: log paths: - /var/log/neutron/* exclude_files: [".gz$"] fields: tags: ["neutron", "network", "openstack"] # Consecutive lines that don't match the pattern are appended to # the previous line that does match. multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}' multiline.negate: true multiline.match: after - input_type: log paths: - /var/log/openvswitch/* exclude_files: [".gz$"] fields: tags: ["openvswitch", "openstack"] multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}' multiline.negate: true multiline.match: after - input_type: log paths: - /var/log/apache2/* exclude_files: [".gz$", "keystone_access.log"] fields: tags: ["horizon", "dashboard", "apache2", "openstack"] - input_type: log paths: - /var/log/apache2/keystone_access.log exclude_files: [".gz$"] fields: tags: ["horizon", "dashboard", "apache2", "access", "openstack"] # The keystone_access.log file contains lines with no date that only # have the word "combine" on them. You could use the multiline option # to append the word to the previous event or exclude those lines. # multiline.pattern: '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' # multiline.negate: true Using ELK for NonStop and Virtualized NonStop event logs 151

  # multiline.match: after
  # multiline.timeout: 240
  exclude_lines: ['^combine']
- input_type: log
  paths:
    - /var/log/cinder/*
  exclude_files: [".gz$"]
  fields:
    tags: ["cinder", "block storage", "openstack"]
- input_type: log
  paths:
    - /var/log/vnonstop/vnonstop-api.*
  exclude_files: [".gz$"]
  fields:
    tags: ["vnonstop", "Virtualized NonStop", "openstack"]
  # Consecutive lines that don't match the pattern are appended to
  # the previous line that does match.
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  # After the 120 seconds, Filebeat sends the multiline event even
  # if no new pattern is found to start a new event.
  multiline.timeout: 120

Using Kibana in a Linux single server environment

In a single Linux server environment, the first three tabs on the left side of the Kibana window are:

Discover
Visualize
Dashboard

Discover Tab

This tab is used to look at the events over a time span and search for specific information within events. Options for using the Discover tab:

1. Select the index you want to use at the upper left corner of the Discover window. For events from XMA or OpenStack, select logstash-*.
2. Time ranges: There is a time picker at the top of the window. Select a relative or absolute time range.

TIP: You can also put a time range in the search string in the search bar, which is just below the time picker.

3. Search bar: Type a search string in the search bar at the top of the page. Kibana uses the Lucene search syntax, which can be used to search within individual fields or to do a search of all fields in the event. The syntax details are at query-dsl-query-string-query.html#query-string-syntax.
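As an illustration of the Lucene syntax, a few search strings are shown below. These queries are examples rather than text from this guide; the field and tag names follow the ones produced by the XMA CEF filter configured earlier in this appendix, and the event number is only a placeholder.

deviceproduct:NONSTOP
deviceproduct:NONSTOP AND eventnumber:512
tags:EMS-CEF AND terminalname:*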

Here is an example of the Discover tab with the logstash-* index, a search of the last 15 minutes, looking for events with "NONSTOP" in the deviceproduct field. This is a CEF-formatted event. Click the arrow to the left of the time for each event to see a detailed list of all the fields in the event.

This partial example shows a detailed view of a CEF-formatted event that came from a NonStop EMS event log:

Visualize Tab

The Visualize tab creates charts and graphs of events. When you create a new visualization by choosing the Visualize tab, you can select a saved visualization from the list or create a new one by clicking the plus sign above the list next to the search bar.

For a new visualization:

1. Choose the type (bar, pie, and so on).
2. Start a new search by selecting the index, or use a saved search by selecting one from the list.
3. Select values for the x and y axes, and add sub-buckets as needed to meet your search criteria.
4. After changing the data settings, click play to put them into effect.

TIP: You can also use the search bar at the top of the page to include or exclude the events you are interested in graphing.

5. To save the visualization, select save at the top of the page.

NOTE: Selecting save does not save the time range with the visualization. The time range is based on what is currently in the time-picker at the top of the screen. The time range can be saved in the dashboard, so you can add visualizations there and save the time associated with it.

Here is a simple example of a bar chart with the event count on the Y-axis and the timestamp on the X-axis:

Here is an example of a bar chart with the event count on the Y-axis, the timestamp on the X-axis, and a sub-bucket broken down by host:

Here is an example of a pie chart visualization with the inner circle divided by program file name and the outer ring representing the system name:

Dashboard Tab

The Dashboard tab is used to show several visualizations that you have created on one page. When you click the Dashboard tab, you can either select a previously saved dashboard to load, or use the plus sign to create a new dashboard.

Use the add button at the top of the page to add visualizations to your Dashboard. You can resize or move visualizations around by clicking and dragging. Save the dashboard with the save button at the top of the screen. You have the option of checking the box for "store time with dashboard" when saving.

Here is an example of a Dashboard with two visualizations:


More information

Virtual Recovery Assistant user s guide

Virtual Recovery Assistant user s guide Virtual Recovery Assistant user s guide Part number: T2558-96323 Second edition: March 2009 Copyright 2009 Hewlett-Packard Development Company, L.P. Hewlett-Packard Company makes no warranty of any kind

More information

HPE NFV Director. User Guide Release Sixth Edition

HPE NFV Director. User Guide Release Sixth Edition HPE NFV Director User Guide Release 4.2.1 Sixth Edition Notices Legal notice Copyright 2017 Hewlett Packard Enterprise Development LP Confidential computer software. Valid license from HPE required for

More information

vrealize Operations Management Pack for NSX for vsphere 2.0

vrealize Operations Management Pack for NSX for vsphere 2.0 vrealize Operations Management Pack for NSX for vsphere 2.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition.

More information

HP 3PAR OS MU3 Patch 18 Release Notes

HP 3PAR OS MU3 Patch 18 Release Notes HP 3PAR OS 3.2.1 MU3 Patch 18 Release Notes This release notes document is for Patch 18 and intended for HP 3PAR Operating System Software 3.2.1.292 (MU3). HP Part Number: QL226-98326 Published: August

More information

VMware vsphere Administration Training. Course Content

VMware vsphere Administration Training. Course Content VMware vsphere Administration Training Course Content Course Duration : 20 Days Class Duration : 3 hours per day (Including LAB Practical) Fast Track Course Duration : 10 Days Class Duration : 8 hours

More information

Dell EMC Ready Architecture for Red Hat OpenStack Platform

Dell EMC Ready Architecture for Red Hat OpenStack Platform Dell EMC Ready Architecture for Red Hat OpenStack Platform Release Notes Version 13 Dell EMC Service Provider Solutions ii Contents Contents List of Tables...iii Trademarks... iv Notes, cautions, and warnings...v

More information

Install ISE on a VMware Virtual Machine

Install ISE on a VMware Virtual Machine Supported VMware Versions, page 1 Support for VMware vmotion, page 1 Support for Open Virtualization Format, page 2 Virtual Machine Requirements, page 3 Virtual Machine Resource and Performance Checks,

More information

vsphere Networking 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7

vsphere Networking 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments about

More information

HP Intelligent Management Center Remote Site Management User Guide

HP Intelligent Management Center Remote Site Management User Guide HP Intelligent Management Center Remote Site Management User Guide Abstract This book provides overview and procedural information for Remote Site Management, an add-on service module to the Intelligent

More information

VMware Integrated OpenStack Administrator Guide

VMware Integrated OpenStack Administrator Guide VMware Integrated OpenStack Administrator Guide VMware Integrated OpenStack 2.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

OMi Management Pack for Microsoft SQL Server. Software Version: For the Operations Manager i for Linux and Windows operating systems.

OMi Management Pack for Microsoft SQL Server. Software Version: For the Operations Manager i for Linux and Windows operating systems. OMi Management Pack for Microsoft Software Version: 1.01 For the Operations Manager i for Linux and Windows operating systems User Guide Document Release Date: April 2017 Software Release Date: December

More information

HP Helion CloudSystem 9.0 Update 1 Installation Guide

HP Helion CloudSystem 9.0 Update 1 Installation Guide HP Helion CloudSystem 9.0 Update 1 Installation Guide About this guide This information is for use by administrators using HP Helion CloudSystem Software 9.0 Update 1, who are assigned to configure and

More information

QuickSpecs. Overview. HPE Ethernet 10Gb 2-port 535 Adapter. HPE Ethernet 10Gb 2-port 535 Adapter. 1. Product description. 2.

QuickSpecs. Overview. HPE Ethernet 10Gb 2-port 535 Adapter. HPE Ethernet 10Gb 2-port 535 Adapter. 1. Product description. 2. Overview 1. Product description 2. Product features 1. Product description HPE Ethernet 10Gb 2-port 535FLR-T adapter 1 HPE Ethernet 10Gb 2-port 535T adapter The HPE Ethernet 10GBase-T 2-port 535 adapters

More information

Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.0

Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.0 Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.0 First Published: 2017-03-15 Last Modified: 2017-08-03 Summary Steps Setting up your Cisco Cloud Services Platform 2100 (Cisco CSP 2100)

More information

HPE 3PAR OS MU3 Patch 24 Release Notes

HPE 3PAR OS MU3 Patch 24 Release Notes HPE 3PAR OS 3.1.3 MU3 Patch 24 Release Notes This release notes document is for Patch 24 and intended for HPE 3PAR Operating System Software + P19. Part Number: QL226-99298 Published: August 2016 Edition:

More information

HPE XP7 Performance Advisor Software 7.2 Release Notes

HPE XP7 Performance Advisor Software 7.2 Release Notes HPE XP7 Performance Advisor Software 7.2 Release Notes Part Number: T1789-96464a Published: December 2017 Edition: 2 Copyright 1999, 2017 Hewlett Packard Enterprise Development LP Notices The information

More information

vsphere Host Profiles Update 1 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5

vsphere Host Profiles Update 1 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 Update 1 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments about this

More information

HPE ConvergedSystem 700 for Hyper-V Deployment Accelerator Service

HPE ConvergedSystem 700 for Hyper-V Deployment Accelerator Service Data sheet HPE ConvergedSystem 700 for Hyper-V Deployment Accelerator Service HPE Technology Consulting HPE ConvergedSystem 700 for Hyper-V is a solution that allows you to acquire and deploy a virtualization

More information

Veeam Cloud Connect. Version 8.0. Administrator Guide

Veeam Cloud Connect. Version 8.0. Administrator Guide Veeam Cloud Connect Version 8.0 Administrator Guide June, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be reproduced,

More information

HPE 3PAR OS MU3 Patch 97 Upgrade Instructions

HPE 3PAR OS MU3 Patch 97 Upgrade Instructions HPE 3PAR OS 3.2.2 MU3 Patch 97 Upgrade Instructions Abstract This upgrade instructions document is for installing Patch 97 on the HPE 3PAR Operating System Software. This document is for Hewlett Packard

More information

HP Helion OpenStack Carrier Grade 1.1: Release Notes

HP Helion OpenStack Carrier Grade 1.1: Release Notes HP Helion OpenStack Carrier Grade 1.1: Release Notes HP Helion OpenStack Carrier Grade Contents 2 Contents HP Helion OpenStack Carrier Grade 1.1: Release Notes...3 Changes in This Release... 5 Usage Caveats...7

More information

Cisco Virtual Networking Solution for OpenStack

Cisco Virtual Networking Solution for OpenStack Data Sheet Cisco Virtual Networking Solution for OpenStack Product Overview Extend enterprise-class networking features to OpenStack cloud environments. A reliable virtual network infrastructure that provides

More information

HPE 1/8 G2 Tape Autoloader and MSL Tape Libraries Encryption Kit User Guide

HPE 1/8 G2 Tape Autoloader and MSL Tape Libraries Encryption Kit User Guide HPE 1/8 G2 Tape Autoloader and MSL Tape Libraries Encryption Kit User Guide Abstract This guide provides information about developing encryption key management processes, configuring the tape autoloader

More information

HP 3PAR OS MU3 Patch 17

HP 3PAR OS MU3 Patch 17 HP 3PAR OS 3.2.1 MU3 Patch 17 Release Notes This release notes document is for Patch 17 and intended for HP 3PAR Operating System Software. HP Part Number: QL226-98310 Published: July 2015 Edition: 1 Copyright

More information

Storage Protocol Offload for Virtualized Environments Session 301-F

Storage Protocol Offload for Virtualized Environments Session 301-F Storage Protocol Offload for Virtualized Environments Session 301-F Dennis Martin, President August 2016 1 Agenda About Demartek Offloads I/O Virtualization Concepts RDMA Concepts Overlay Networks and

More information

Introduction to HPE ProLiant Servers HE643S

Introduction to HPE ProLiant Servers HE643S Course data sheet Introduction to HPE ProLiant Servers HE643S HPE course number Course length Delivery mode View schedule, local pricing, and register View related courses HE643S 2 Days ILT, VILT View

More information

AMD EPYC Processors Showcase High Performance for Network Function Virtualization (NFV)

AMD EPYC Processors Showcase High Performance for Network Function Virtualization (NFV) White Paper December, 2018 AMD EPYC Processors Showcase High Performance for Network Function Virtualization (NFV) Executive Summary Data centers and cloud service providers are creating a technology shift

More information

vedge Cloud Datasheet PRODUCT OVERVIEW DEPLOYMENT USE CASES EXTEND VIPTELA OVERLAY INTO PUBLIC CLOUD ENVIRONMENTS

vedge Cloud Datasheet PRODUCT OVERVIEW DEPLOYMENT USE CASES EXTEND VIPTELA OVERLAY INTO PUBLIC CLOUD ENVIRONMENTS vedge Cloud Datasheet PRODUCT OVERVIEW Viptela vedge Cloud is a software router platform that supports entire range of capabilities available on the physical vedge-100, vedge-1000 and vedge-2000 router

More information

Oracle Enterprise Manager Ops Center

Oracle Enterprise Manager Ops Center Oracle Enterprise Manager Ops Center Configure and Install Guest Domains 12c Release 3 (12.3.2.0.0) E60042-03 June 2016 This guide provides an end-to-end example for how to use Oracle Enterprise Manager

More information

Deploy the ASAv Using KVM

Deploy the ASAv Using KVM You can deploy the ASAv using the Kernel-based Virtual Machine (KVM). About ASAv Deployment Using KVM, on page 1 Prerequisites for the ASAv and KVM, on page 2 Prepare the Day 0 Configuration File, on page

More information

HPE FlexNetwork MSR Router Series

HPE FlexNetwork MSR Router Series HPE FlexNetwork MSR Router Series About the HPE MSR Router Series Command s Part number: 5998-8799 Software version: CMW710-R0305 Document version: 6PW106-20160308 Copyright 2016 Hewlett Packard Enterprise

More information

HP FlexFabric Virtual Switch 5900v Technology White Paper

HP FlexFabric Virtual Switch 5900v Technology White Paper HP FlexFabric Virtual Switch 5900v Technology White Paper Part number: 5998-4548 Document version: 6W100-20131220 Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes

More information

HP SDN Document Portfolio Introduction

HP SDN Document Portfolio Introduction HP SDN Document Portfolio Introduction Technical Solution Guide Version: 1 September 2013 Table of Contents HP SDN Document Portfolio Overview... 2 Introduction... 2 Terms and Concepts... 2 Resources,

More information

Intelligent Provisioning 3.00 Release Notes

Intelligent Provisioning 3.00 Release Notes Intelligent Provisioning 3.00 Release Notes Part Number: 881705-001b Published: October 2017 Edition: 3 Copyright 2017 Hewlett Packard Enterprise Development LP Notices The information contained herein

More information

HPE FlexNetwork MSR Router Series

HPE FlexNetwork MSR Router Series HPE FlexNetwork MSR Router Series About the HPE MSR Router Series Configuration Part number: 5998-8821 Software version: CMW710-R0305 Document version: 6PW106-20160308 Copyright 2016 Hewlett Packard Enterprise

More information