IBM

Linux: Configuring IBM PowerKVM on Power systems

Note: Before using this information and the product it supports, read the information in "Notices."

Third Edition (November 2015)

© Copyright IBM Corporation 2014, 2015.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Configuring IBM PowerKVM on Power Systems
  Creating guests
    Creating a guest with Kimchi
    Setting up a template using Kimchi
    Creating a guest with the virsh command
    Creating guests with the virt-install command
    Common virsh command options
  PowerKVM storage
    Common virsh command options - Storage
    Find storage pool sources with virsh
    Valid storage pool types
    Storage pool
      Setting up a storage pool with Kimchi
      Creating a storage pool with virsh
    PowerKVM Logical Volume Manager
      Logical Volume Manager components
      Logical Volume Manager commands
      Setting up the Logical Volume Manager logical volumes
        Creating physical volumes
        Replacing physical volumes that are missing
        Removing lost physical volumes from a volume group
        Creating volume groups
        Creating logical volumes
      Logical Volume Manager troubleshooting
        Displaying information on failed devices
        Recovering metadata of physical volumes
        Handling errors that indicate insufficient free extents
        Handling warnings that indicate duplicate PV for multipathed devices
  Networking on PowerKVM
    Common virsh command options - Networking
    Setting up a network connection with Kimchi
    Creating a network bridge with the ibm-configure-system utility
    Verifying the default virtual network
    Configuring KVM guests to use bridge
    Adding guest para-virtualized network devices with libvirt
    Using vhost-net for high-bandwidth applications
    Hot-plugging a network connection
    VLAN segmentation
      Configuring VLAN segmentation
      Configuring 802.1q VLANs in Kimchi
    PowerKVM and Quality of Service
  Remote management of PowerKVM with libvirt
    Overview of PowerKVM remote management
    Remote management with SSH tunnels
      Managing PowerKVM guests remotely with the virsh command
      Displaying the remote PowerKVM VNC console with any VNC client
    Remote management with SASL authentication and encryption
    Remote management with TLS
      Step 1. Creating a CA key and certificate in your PowerKVM host
      Step 2. Creating the client and server keys and certificates in your PowerKVM host
      Step 3. Distributing keys and certificates to the PowerKVM host server
      Step 4. Distributing keys and certificates to clients or management stations
      Step 5. Editing the libvirtd daemon configuration
      Step 6. Changing the firewall configuration
      Step 7. Verifying that remote management is working
  User management
  KVM guest migration
    Common virsh command options: Migrate
    Migrating live with virsh command
    Migrating a KVM guest offline with migrate option
    Migrating a KVM guest offline with dumpxml
  The svirt service
    Overview of the svirt service
    Creating static svirt labels
    Verifying svirt labeling by examining the labels
    Verifying svirt labeling by viewing the domain.xml file
  Manage PowerKVM resources
    Common virsh command options - Resources
    Manage processors and memory
      Simultaneous Multi-Threading (SMT)
      Dynamic micro-threading
      Enabling micro-threading
      Processor pinning
      Over-committing processor and memory resources
      Configuring huge pages
      Memory ballooning
      Enabling Kernel Same-page Merge (KSM)
    Enabling PCI pass-through
    Memory and CPU hot plug
      Hot plugging CPUs in a PowerKVM guest
      CPU hot plugging and NUMA guests
      Hot plugging memory in PowerKVM guest
  Developing software for the PowerKVM host
  Update PowerKVM
    Updating PowerKVM with Kimchi
    Updating PowerKVM with yum
    Updating PowerKVM with the ibm-update-system utility
  Updating the firmware
  Monitor and debug information
  Install and update packages
  Migrating to IBM PowerKVM from IBM PowerVM
  Migrating to IBM PowerVM from IBM PowerKVM
  Notices
    Trademarks
    Privacy policy considerations

Configuring IBM PowerKVM on Power Systems

Use these instructions for installing and configuring PowerKVM on Power Systems.

Creating guests

You can create guests by using Kimchi or by using the command-line interfaces.

Creating a guest with Kimchi

Use Kimchi to create virtual machines, or guests.

Procedure
1. Open a browser and go to https://ip_address:8001, where ip_address is the IP address of your KVM system.
2. Select the Guests page.
3. Click the green plus sign (+) to create a guest.
4. On the Guests page, enter a name for the virtual machine guest. Select a template from the list. If no templates exist, click Create a template. If you want a different template from the ones that are displayed, select the Templates page and create or edit one.
5. Select the source media for the template.
6. Select an ISO image.
7. Click Create. The guest is created but not started.
8. Start the guest by clicking the red power button or by selecting Start from the Actions menu. The button changes to green as the guest starts.
9. To connect to the remote console (Livetile), select Connect from the Actions menu.

Results

Note:
- If you create a non-persistent guest outside of Kimchi and then use Kimchi to stop it, the guest is deleted. In Kimchi, the Stop option calls the virsh destroy command, which deletes non-persistent guests. To avoid this issue, either run the virsh shutdown guest command (where guest is the name of your virtual machine) or use the Kimchi console to shut down the guest from within the guest operating system.
- If you are using a logical storage pool for storage, creating a guest disk with the same capacity as the remaining pool capacity can fail, because libvirt does not report the exact number of extents that are required for a disk. Consider creating a guest that uses 1 GB less than what Kimchi and libvirt report as available. If you have trouble starting a guest whose storage is allocated in a logical storage pool, check your system or Kimchi log files for an error such as: libvirtError: internal error: Child process (/usr/sbin/lvcreate --name some_image.img -L [X]K some_pool_name) unexpected exit status 5: Volume group "some_pool_name" has insufficient free space ([X] extents): [X] required.
- Guests that are created by Kimchi from a template have a maximum memory parameter (maxmemory) set to the lesser of the following values: the host physical memory, or the value indicated in the template multiplied by four. After the guest is created, the maxmemory parameter changes only if, during a guest memory update, the host physical memory has decreased to a value lower than what was previously configured as the maxmemory value. In that case, maxmemory becomes the host physical memory value for the guest.
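To illustrate the safe-shutdown advice in the first note, the following sketch shows the difference on the command line; the guest name myguest is illustrative:

   # virsh shutdown asks the guest operating system to shut down cleanly;
   # virsh destroy stops the guest immediately and, for a non-persistent
   # guest, removes its definition as well.
   virsh shutdown myguest

   # Confirm that the guest has reached the "shut off" state.
   virsh list --all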

Setting up a template using Kimchi

You can use Kimchi to define templates for your guests.

About this task

To view the current templates available, start Kimchi and select Templates. From this page, you can perform the following actions:
- Select Edit to edit the template.
- Select Delete to delete the template.

You can also add a template by following these steps:

Procedure
1. On the Templates tab, click the plus (+) icon.
2. Select the location of the source media from the following options:
   - Local ISO image: Select to scan storage pools for installation ISO images available on the system.
   - Remote ISO image: Select to specify a remote location for an installation ISO image.
   If you selected Remote ISO image, the image is retrieved and loaded onto the system. If you selected Local ISO image, all ISO images available on the system are displayed.
3. To create a template from an ISO image, choose from the following options:
   - Select an ISO image from which to create a template, then click Create Templates from Selected ISO.
   - Select All to create a template from each listed ISO image, then click Create Templates from Selected ISO.
   If the ISO image that you want to use does not appear in the scan results, you can select from the following options:
   - Select "I want to use a specific ISO file" to specify a path to the ISO image.
   - Click Search more ISOs to search for more ISO images.
4. Click Create.

Creating a guest with the virsh command

The virsh command is a common command-line utility that can be used to create KVM guests. Run the virsh command as a single command or in an interactive shell session. Most virsh commands require root privileges, although unprivileged users can use the virsh command for read-only operations.

The virsh command uses the libvirt API. PowerKVM includes the libvirt package, but you should ensure that the libvirt daemon is running before you run virsh commands. To check the libvirt daemon status, run the following command:

# systemctl status libvirtd

If the service is running, output similar to this example is shown:

libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Fri 2014-03-14 22:32:43 GMT; 4 days ago
 Main PID: 3517 (libvirtd)

If the output displays inactive, start the libvirt daemon by running this command:

# systemctl start libvirtd
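If the daemon was inactive, you probably also want it to start automatically at boot. A minimal sketch using standard systemd commands:

   # Start libvirtd now and enable it for subsequent boots.
   systemctl start libvirtd
   systemctl enable libvirtd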

You can use the virsh command to create a virtual machine by writing a libvirt XML file that describes the virtual machine and then running the virsh create command. To create a libvirt XML file, you can either write the XML file from scratch or export an existing one and edit it to the parameters that you want. You can find more information about libvirt at http://libvirt.org/formatdomain.html.

Creating guests with the virt-install command

The virt-install command is used to create new KVM guests. The virt-install command is useful when you do not have access to a graphical desktop and, when given the necessary options, can run unattended. If some of the required options are omitted, the virt-install command runs interactively, prompting you for input when required. You must run the virt-install command as root. For more information about the options that you can use with the virt-install command, see http://linux.die.net/man/1/virt-install.

The following example creates a guest named test that uses two virtual processors and 4 GB of RAM, connects to the default network, and uses an ISO file as the installation source:

virt-install --machine=pseries --name=test --virt-type=kvm --boot cdrom,hd \
  --network=default,model=virtio \
  --disk path=/var/lib/libvirt/images/test.qcow2,size=10,format=qcow2,bus=virtio,cache=none \
  --memory=4096 --vcpus=2 \
  --cdrom=/var/lib/libvirt/images/rhel6.5-20131111.0-server-ppc64-dvd1.iso
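For reference, the sketch below shows what a minimal libvirt XML file for a PowerKVM (pseries) guest might look like. The guest name, disk path, and sizes are illustrative, not prescriptive; the authoritative schema is at http://libvirt.org/formatdomain.html.

   <domain type='kvm'>
     <name>test</name>
     <memory unit='GiB'>4</memory>
     <vcpu>2</vcpu>
     <os>
       <type arch='ppc64' machine='pseries'>hvm</type>
       <boot dev='hd'/>
     </os>
     <devices>
       <!-- qcow2 disk served through the virtio bus -->
       <disk type='file' device='disk'>
         <driver name='qemu' type='qcow2' cache='none'/>
         <source file='/var/lib/libvirt/images/test.qcow2'/>
         <target dev='vda' bus='virtio'/>
       </disk>
       <!-- attach to the default libvirt virtual network -->
       <interface type='network'>
         <source network='default'/>
         <model type='virtio'/>
       </interface>
     </devices>
   </domain>

Save the file (for example, as test.xml) and run virsh define test.xml to create a persistent guest, or virsh create test.xml to define and start a transient guest in one step.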

Common virsh command options

You can use the virsh command to manage several tasks for your KVM system.

Note: As a rule, do not run any virsh commands as a background process, because timeouts and errors can occur at unpredictable times.

Table 1. Common virsh command options

virsh connect
    Connects to the KVM hypervisor.
virsh create xmlfile.xml
    Creates and starts a guest from an XML configuration file.
virsh list --all
    Lists all the guests on a host.
virsh dumpxml guest_name
    Writes the XML configuration of the guest as output, which you can redirect to a file.
virsh start guest_name
    Starts an inactive guest.
virsh destroy guest_name
    Immediately stops the guest.
virsh define xmlfile.xml
    Creates a guest from an XML configuration file. The guest is not started.
virsh reboot guest_name
    Restarts the guest.
virsh restore filename
    Restores a guest from a saved file.
virsh resume guest_name
    Resumes a guest that was paused.
virsh save guest_name filename
    Saves the state of the guest to a file.
virsh suspend guest_name
    Pauses the guest.
virsh undefine guest_name
    Deletes the guest, but not the image file.
virsh undefine guest_name --remove-all-storage
    Deletes the guest and all the associated storage.
virsh nodeinfo
    Displays information about the host.
virsh dominfo guest_name
    Displays information about a guest.

PowerKVM storage

Storage for PowerKVM consists of storage volumes and storage pools. A storage pool is a file, directory, or storage device that is available to the guests. Storage pools are divided into volumes that are then assigned to guests.

Common virsh command options - Storage

You can use the virsh command to manage your storage in PowerKVM.

Note: As a rule, do not run any virsh commands as a background process, because timeouts and errors can occur at unpredictable times.

Table 2. Common virsh command options: storage

virsh find-storage-pool-sources
    Creates an XML definition file for all storage pools of a specific type.
virsh pool-define-as pool_name path mountpoint
    Creates a storage pool. Provide the name, the path to the storage, and a mount point on the local system.
virsh pool-list
    Lists all storage pools. To include inactive storage pools, add --all.
virsh pool-build pool_name
    Creates a mount point for the storage pool.
virsh pool-start pool_name
    Starts the storage pool.
virsh pool-autostart pool_name
    Causes the pool to be started every time that libvirt is started. To disable this option, run virsh pool-autostart pool_name --disable.
virsh pool-info pool_name
    Displays information about the pool.
virsh vol-create-as pool_name vol_name size --format format_type
    Creates a volume. Specify the pool where the volume is located, the name of the volume, the size of the image (in K, M, G, or T), and the format of the volume.
virsh vol-list pool_name
    Lists the volumes in a pool. To display more information about each volume, add --details.
virsh vol-clone existing_vol_name new_vol_name --pool pool_name
    Copies and creates a volume in a storage pool.
virsh vol-delete --pool pool_name vol_name
    Deletes a volume from a storage pool.
virsh pool-destroy pool_name
    Stops a pool.
virsh pool-delete pool_name
    Deletes a pool directory from the host.
virsh pool-undefine pool_name
    Removes the pool definition.
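The following sketch strings several of these options together to create a simple directory-backed pool end to end; the pool name guest_images and the paths are illustrative:

   # Define a directory-backed pool; name and target path are illustrative.
   virsh pool-define-as guest_images dir --target /var/lib/libvirt/guest_images

   # Create the directory, start the pool, and have it start with libvirt.
   virsh pool-build guest_images
   virsh pool-start guest_images
   virsh pool-autostart guest_images

   # Carve a 10 GB qcow2 volume out of the pool for a guest to use.
   virsh vol-create-as guest_images disk0.qcow2 10G --format qcow2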

Find storage pool sources with virsh

Use the virsh command to discover storage pool sources on your system. Discovery of storage pool sources is supported for the following pool types:
- logical
- netfs (requires the NFS server IP address or host name)
- iscsi (requires iSCSI server details)

find-storage-pool-sources type [srcspec]
    Returns XML describing all storage pools of a given type that could be found. For the netfs and iscsi types, provide an XML file containing the required details (srcspec).

In this example, create a query to find NFS storage pool sources on server 192.168.122.3. Create an XML file similar to the following:

<source>
  <host name="192.168.122.3"/>
  <dir path="/"/>
  <format type="nfs"/>
</source>

Provide the XML file as an argument to the command:

virsh find-storage-pool-sources netfs nfs.xml

This command returns output similar to the following example:

<sources>
  <source>
    <host name="192.168.122.3"/>
    <dir path="/nim/build_net"/>
    <format type="nfs"/>
  </source>
  <source>
    <host name="192.168.122.3"/>
    <dir path="/kvm_lpm"/>
    <format type="nfs"/>
  </source>
</sources>
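The iscsi type takes a similar srcspec document; the sketch below assumes an iSCSI target host at 192.168.122.4, which is illustrative:

   <source>
     <host name="192.168.122.4"/>
   </source>

   virsh find-storage-pool-sources iscsi iscsi_host.xml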

find-storage-pool-sources-as type [host] [port] [initiator]
    Returns XML describing all storage pools of a given type that could be found. For the netfs and iscsi types, provide the host or other details as appropriate.

In this example, create a query to find NFS storage pool sources on server 192.168.122.3:

virsh find-storage-pool-sources-as netfs 192.168.122.3

This command returns output similar to the following example:

<sources>
  <source>
    <host name="192.168.122.3"/>
    <dir path="/nim/build_net"/>
    <format type="nfs"/>
  </source>
  <source>
    <host name="192.168.122.3"/>
    <dir path="/kvm_lpm"/>
    <format type="nfs"/>
  </source>
</sources>

Valid storage pool types

You can create and use several types of storage pools with PowerKVM.

Directory pool (DIR)
    Specifies a directory pool. When you use a directory pool, the directory must exist and you must provide the path to the volume within that pool. A directory pool specified in XML looks similar to this example:

    <pool type="dir">
      <name>kvmstorageimage</name>
      <target>
        <path>/var/lib/virt/images</path>
      </target>
    </pool>

Network file system pool (NFS)
    Specifies an NFS file system pool as a type of exported storage. When you specify NFS, the file system is mounted and files are managed from its mount point. To use an NFS file system pool, you need the host name and path of the exported directory. PowerKVM attempts to mount the network file system. An NFS pool specified in XML looks similar to this example:

    <pool type="netfs">
      <name>kvmstorageimage</name>
      <source>
        <host name="nfs.example.com"/>
        <dir path="/var/lib/virt/images"/>
        <format type="nfs"/>
      </source>
      <target>
        <path>/var/lib/virt/images</path>
      </target>
    </pool>

iSCSI server (iscsi)
    Specifies a pool that is based on a target that is allocated on an iSCSI server. An iSCSI volume pool must exist. Consider configuring the pool to use /dev/disk/by-path or /dev/disk/by-id for the target path; these provide persistent, stable naming for LUNs. You can also add iSCSI authentication. iSCSI authentication uses the CHAP protocol; specify the user name and password when you define the pool. An iSCSI pool specified in XML looks similar to this example:

    <pool type="iscsi">
      <name>kvmstorageimage</name>
      <source>
        <host name="iscsi.example.com"/>
        <device path="path2target"/>
        <auth type="chap" username="userid">
          <secret usage="mypassword"/>
        </auth>
      </source>
      <target>
        <path>/var/lib/virt/images</path>
      </target>
    </pool>

Logical volume storage pool (Logical)
    Specifies a logical volume (LVM) storage pool. For an existing LVM group, use the group name to identify it. If you are creating a new LVM group, specify the source devices to use. You must also provide a path to the LVM pool. A logical pool specified in XML looks similar to this example:

    <pool type="logical">
      <name>volgroup</name>
      <source>
        <device path="/dev/sda1"/>
        <device path="/dev/sdb1"/>
        <device path="/dev/sdc1"/>
      </source>
      <target>
        <path>/dev/volgroup</path>
      </target>
    </pool>

SCSI Fibre Channel pool (SCSI)
    Specifies a pool that is based on a SCSI Fibre Channel. The SCSI Fibre Channel volume pool must exist. Consider configuring the pool to use /dev/disk/by-path or /dev/disk/by-id for the target path; these provide persistent, stable naming for LUNs. A SCSI pool specified in XML looks similar to this example:

    <pool type="scsi">
      <name>kvmstorageimage</name>
      <source>
        <adapter name="host0"/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>
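Rather than writing pool XML from scratch, you can dump the definition of an existing pool and adapt it. A short sketch; the pool named default exists on typical installations, but treat that as an assumption:

   # Dump an existing pool definition to use as a starting template.
   virsh pool-dumpxml default > mypool.xml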

Storage pool

Create a storage pool with the Kimchi interface or the virsh command.

Setting up a storage pool with Kimchi

You can use Kimchi to define storage pools for your guests.

About this task

To view the current storage pools available, start Kimchi and select Storage. From this page, you can perform the following actions:
- Select Activate to activate the storage pool so that it can be used.
- Select Deactivate to deactivate an active storage pool.
- Select Undefine to remove an inactive storage pool.
- Select the arrow at the end of a row to view details about a storage pool.

You can also define a storage pool by following these steps:

Procedure
1. On the Storage tab, click the plus (+) icon.
2. In the Storage pool name field, type the name to be used to identify the storage pool.
3. In the Storage pool type list, select the type. You can choose:
   DIR: Specifies a directory pool. When you select DIR, type the Storage path (file path to the storage pool).
   NFS: Specifies a network file system pool. When you select NFS, type the NFS server IP address and NFS path (path of the exported directory).
   iSCSI: Specifies a pool that is based on a target that is allocated on an iSCSI server. When you select iSCSI, type the iSCSI server IP address and the Target on the iSCSI server. You can optionally select to add iSCSI authentication.
   Logical: Specifies a logical volume storage pool. Select the location of the device in Device path.
4. Click Create.

Note: You must create the storage pool by using a partition. At this time, you cannot create a logical pool from an existing volume by using Kimchi.

Creating a storage pool with virsh

Create a storage pool with the virsh command and a temporary XML configuration file.

About this task

To create a storage pool by defining a temporary XML configuration file and then using the virsh command to add it, follow these steps:

Procedure
1. Create an XML file for the storage device. This example defines an iSCSI device with authentication:

   <pool type="iscsi">
     <name>storage_images</name>
     <source>
       <host name="iscsi.example.com"/>
       <device path="path2target"/>
       <auth type="chap" username="userid">
         <secret usage="mypassword"/>
       </auth>
     </source>
     <target>
       <path>/dev/disk/by-path</path>
     </target>
   </pool>

2. Add the XML file to the storage definition:

   virsh pool-define ~/storage_image.xml

3. Verify that the pool was created:

   virsh pool-list --all
   Name                 State      Autostart
   -----------------------------------------
   default              active     yes
   storage_images       inactive   no

4. Start the storage pool:

   virsh pool-start storage_images
   Pool storage_images started

5. Verify that the pool started:

   virsh pool-list --all
   Name                 State      Autostart
   -----------------------------------------
   default              active     yes
   storage_images       active     no

6. Optional: Turn on autostart for the storage pool:

   virsh pool-autostart storage_images
   Pool storage_images marked as autostarted.

7. Verify that the storage pool was created correctly and is running:

   virsh pool-info storage_images
   Name:           storage_images
   UUID:           afcc5367-6770-e151-bcb3-847bc76c5e28
   State:          running
   Persistent:     unknown
   Autostart:      yes
   Capacity:       115.53 GB
   Allocation:     0.00
   Available:      115.53 GB

PowerKVM Logical Volume Manager

The Logical Volume Manager (LVM) is a tool that groups physical storage into chunks of logical storage. Grouping physical storage into chunks of logical storage provides much greater flexibility than using physical storage directly. With a logical volume, you are not restricted to physical disk sizes. In addition, the hardware storage configuration is hidden from the software, so storage can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs.

To create an LVM logical volume, you combine physical volumes into a volume group (VG). This creates a pool of disk space out of which you can allocate LVM logical volumes (LVs). This process is analogous to the way in which disks are divided into partitions. A logical volume is used by file systems and applications, such as databases.

Logical Volume Manager components

This section describes the components of Logical Volume Manager (LVM) logical volumes (LVs).

Physical Volumes
    The underlying physical storage unit of an LVM logical volume is a block device such as a partition or a whole disk. To use the device for an LVM logical volume, you must initialize the device as a physical volume (PV). Initialization places a label near the start of the device. An LVM label provides correct identification and device ordering for a physical device, because devices can come up in any order when the system starts. An LVM label remains persistent across restarts and throughout a cluster.

Volume Groups
    Physical volumes are combined into volume groups (VGs) to create a pool of disk space out of which you can allocate logical volumes. Within a volume group, the disk space that is available for allocation is divided into units of a fixed size, called extents. Within a physical volume, extents are referred to as physical extents. A logical volume is allocated into logical extents of the same size as the physical extents. The volume group maps the logical extents to physical extents.
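As a concrete illustration of how these components stack, the following sketch creates a physical volume, a volume group, and a logical volume in sequence; the device names and sizes are illustrative:

   # Initialize two disks as physical volumes.
   pvcreate /dev/sdb /dev/sdc

   # Pool their extents into one volume group.
   vgcreate vg_guests /dev/sdb /dev/sdc

   # Allocate a 20 GB logical volume from that pool.
   lvcreate -L 20G -n lv_data vg_guests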

LVM Logical Volumes
    In LVM, a volume group is divided up into logical volumes. The following list describes the different types of logical volumes.

Linear Volumes
    A linear volume aggregates space from one or more physical volumes into one logical volume. For example, if you have two 70 GB disks, you can create a 140 GB logical volume.

Striped Logical Volumes
    When you write data to an LVM logical volume, the file system lays out the data across the underlying physical volumes. You can control the way that the data is written to the physical volumes by creating a striped logical volume.

RAID Logical Volumes
    LVM supports RAID 1/4/5/6/10. RAID logical volumes are not cluster-aware: they can be created and activated exclusively on one machine, but they cannot be activated simultaneously on more than one machine.

Thinly Provisioned Logical Volumes (Thin Volumes)
    Logical volumes can be thinly provisioned to create logical volumes that are larger than the available extents. Using thin provisioning, you can allocate a storage pool of free space to an arbitrary number of devices when needed by applications. You can then create devices that are bound to the thin pool for later allocation when an application writes to the logical volume.

Snapshot Volumes
    The LVM snapshot feature creates virtual images of a device at a particular instant without causing a service interruption. When a change is made to the original device after a snapshot is taken, the snapshot feature makes a copy of the changed data area as it was before the change, so that it can reconstruct the state of the device.

Thinly Provisioned Snapshot Volumes
    Thin snapshot volumes allow many virtual devices to be stored on the same data volume. This simplifies administration and allows for the sharing of data between snapshot volumes.

Logical Volume Manager commands

This section describes the commands and examples that you can use with the Logical Volume Manager (LVM).

Table 3. LVM commands

lvcreate
    Creates a logical volume. For example, the following command creates a 10 GB logical volume in the volume group vg1:
    lvcreate -L 10G vg1

lvchange
    Modifies logical volume parameters. For example, the following command changes the permission on volume lvol1 in volume group vg00 to be read-only:
    lvchange -pr vg00/lvol1

lvconvert
    Converts an existing linear logical volume to a RAID device. For example, the following command converts the linear logical volume my_lv in volume group my_vg to a 2-way RAID 1 array:
    lvconvert --type raid1 -m 1 my_vg/my_lv

lvmdiskscan
    Scans for block devices that might be used as physical volumes. For example, when you run the lvmdiskscan command, output similar to the following is displayed:
    /dev/ram0  [ 16.00 MB]
    /dev/sda   [ 17.15 GB]
    /dev/root  [ 13.69 GB]
    /dev/ram   [ 16.00 MB]
    /dev/sda1  [ 17.14 GB] LVM physical volume

lvdisplay
    Displays properties of LVM logical volumes, such as size, layout, and mapping, in a fixed format. For example, the following command displays the attributes of lvol5 in vg3. If snapshot logical volumes have been created for this original logical volume, this command also displays a list of all snapshot logical volumes and their status (active or inactive):
    lvdisplay -v /dev/vg3/lvol5

lvextend
    Extends the size of a logical volume. For example, the following command extends the logical volume /dev/myvg/vol2 to 12 GB:
    lvextend -L12G /dev/myvg/vol2

lvreduce
    Reduces the size of a logical volume. For example, the following command reduces the size of logical volume lvol4 in volume group vg1 by 3 logical extents:
    lvreduce -l -3 vg1/lvol4

lvremove
    Removes a logical volume. For example, the following command removes the logical volume /dev/vgex/lvex from the volume group vgex. Because the logical volume has not been deactivated, you are prompted to confirm:
    lvremove /dev/vgex/lvex
    Do you really want to remove active logical volume "lvex"? [y/n]: y
    Logical volume "lvex" successfully removed

lvs
    Displays properties of LVM logical volumes in a configurable form, displaying one line per logical volume. For example, when you run the lvs command, output similar to the following is displayed:
    LV     VG  Attr   LSize  Origin Snap% Move Log Copy% Convert
    lvol1  vg1 owi-a- 51.00M
    newvg1 vg1 swi-a-  3.00M lvol1   0.20

lvscan
    Scans for all logical volumes in the system and lists them. For example, when you run the lvscan command, output similar to the following is displayed:
    ACTIVE   '/dev/vg2/lv2' [1.45 GB] inherit

pvchange
    Modifies physical volume properties. For example, the following command disallows the allocation of physical extents on /dev/sdk4:
    pvchange -x n /dev/sdk4

pvcreate
    Initializes a block device to be used as a physical volume. For example, the following command initializes /dev/sda, /dev/sdb, and /dev/sdc as LVM physical volumes:
    pvcreate /dev/sda /dev/sdb /dev/sdc

pvdisplay
    Displays properties of LVM physical volumes, such as size, extents, and volume group, in a verbose multi-line output for each physical volume. For example, the following example shows the output of the pvdisplay command for a single physical volume:
    --- Physical volume ---
    PV Name         /dev/sdc2
    VG Name         vg1
    PV Size         13.14 GB / not usable 2.40 MB
    Allocatable     yes
    PE Size (KByte) 4096
    Total PE        4386
    Free PE         4376
    Allocated PE    13
    PV UUID         Joqsdh-yRDj-kuFn-RdwM-01R9-XO8B-mcpcFe

pvmove
    Migrates the data of one physical volume to another physical volume. For example, the following command moves all allocated space off the physical volume /dev/sda1 to other free physical volumes in the volume group:
    pvmove /dev/sda1

pvremove
    Removes the LVM label when a device is no longer required. For example, when you run the pvremove /dev/ram16 command, output similar to the following is displayed:
    Labels on physical volume "/dev/ram16" successfully wiped

pvresize
    Changes the size of an underlying block device. You can run this command while LVM is using the physical volume.

pvs
    Displays properties of LVM physical volumes in a configurable form, displaying one line per physical volume. For example, the following output is displayed by default when you run the pvs command:
    PV         VG  Fmt  Attr PSize  PFree
    /dev/sda1  vg1 lvm2 a-   12.14G 12.14G
    /dev/sdb1  vg1 lvm2 a-   12.14G 12.09G
    /dev/sdc1  vg1 lvm2 a-   12.14G 12.14G

pvscan
    Scans all supported LVM block devices in the system for physical volumes. For example, when you run the pvscan command, all physical devices that are found are displayed:
    PV /dev/sda2  VG vg1   lvm2 [964.00 MB / 0 free]
    PV /dev/sdb1  VG vg1   lvm2 [964.00 MB / 428.00 MB free]
    PV /dev/sdc2            lvm2 [964.84 MB]
    Total: 3 [2.83 GB] / in use: 2 [1.88 GB] / in no VG: 1 [964.84 MB]

vgcfgrestore
    Restores the metadata of a volume group from the archive to all the physical volumes in the volume group. For example, when you run the vgcfgrestore VG command, output similar to the following is displayed:
    Restored volume group VG

vgchange
    Changes volume group properties, such as whether an existing volume group is local or clustered. For example, the following command deactivates the volume group my_volume_group1:
    vgchange -a n my_volume_group1

vgcreate
    Creates a volume group from one or more physical volumes. This command creates a new volume group by name and adds at least one physical volume to it. You can also create volume groups in a cluster environment with this command, just as you create them on a single node. For example, the following command creates a volume group named vg1 that contains physical volumes /dev/sda1 and /dev/sdb1:
    vgcreate vg1 /dev/sda1 /dev/sdb1

vgdisplay
    Displays properties of LVM volume groups, such as size, extents, and number of physical volumes, in a fixed form. For example, the following example shows the output of the vgdisplay new_vg1 command:
    --- Volume group ---
    VG Name              new_vg1
    System ID
    Format               lvm2
    Metadata Areas       3
    Metadata Sequence No 11
    VG Access            read/write
    VG Status            resizable
    MAX LV               0
    Cur LV               1
    Open LV              0
    Max PV               0
    Cur PV               3
    Act PV               3
    VG Size              51.42 GB
    PE Size              4.00 MB
    Total PE             13164
    Alloc PE / Size      13 / 52.00 MB
    Free PE / Size       13151 / 51.37 GB
    VG UUID              jxdj0a-xkk0-opvo-0118-nlwo-wwqd-fe5d32

vgexport
    Makes an inactive volume group inaccessible to the system, which allows you to detach its physical volumes.

vgimport
    Makes a volume group accessible to a system again after you have run the vgexport command to make it inactive.

vgextend
    Adds more physical volumes to an existing volume group. For example, the following command adds the physical volume /dev/sda1 to the volume group vg2:
    vgextend vg2 /dev/sda1

vgmerge
    Combines two volume groups into a single volume group. For example, the following command merges the inactive volume group my_vg1 into the active or inactive volume group databases1, giving verbose runtime information:
    vgmerge -v databases1 my_vg1

vgmknodes
    Re-creates a volume group directory and logical volume special files.

vgreduce
    Removes unused physical volumes from a volume group. For example, the following command removes the physical volume /dev/hdb1 from the volume group my_volume_group1:
    vgreduce my_volume_group1 /dev/hdb1

vgremove
    Removes a volume group that contains no logical volumes. For example, when you run the vgremove vg1 command, output similar to the following is displayed:
    Volume group "vg1" successfully removed

vgrename
    Renames an existing volume group. For example, the following command renames the existing volume group vg3 to my_volume_group1:
    vgrename /dev/vg3 /dev/my_volume_group1

vgs
    Provides volume group information in a configurable form, displaying one line per volume group. For example, when you run the vgs command, the attributes of the volume groups VolGroup01 and vg1 are displayed:
    VG         #PV #LV #SN Attr   VSize  VFree
    VolGroup01   1   2   0 wz--n- 19.88G     0
    vg1          1   1   0 wz--nc 46.00G 8.00M

vgscan
    Scans all the disks for volume groups, rebuilds the LVM cache file, and displays the volume groups. For example, when you run the vgscan command, output similar to the following is displayed:
    Reading all physical volumes. This may take a while...
    Found volume group "new_vg1" using metadata type lvm2
    Found volume group "vg1" using metadata type lvm2

vgsplit
    Splits the physical volumes of a volume group and creates a new volume group. For example, when you run the vgsplit minivg microvg /dev/ram15 command, the new volume group microvg is split from the original volume group minivg. Output similar to the following is displayed:
    Volume group "microvg" successfully split from "minivg"

Setting up the Logical Volume Manager logical volumes

This section describes how you can set up Logical Volume Manager (LVM) logical volumes by using the LVM command-line interface. To set up LVM logical volumes, you must first create physical volumes and volume groups, and then create logical volumes. For more information about other administrative tasks for physical volumes, volume groups, and logical volumes, see LVM Administration With CLI Commands.

Creating physical volumes

You can create physical volumes by using the LVM command-line interface.

Procedure
1. If you are using a whole disk device for your physical volume, ensure that the disk has no partition table. For DOS disk partitions, set the partition ID to 0x8e by running the fdisk or cfdisk command. For whole disk devices only, erase the partition table to remove all data on that disk. You can remove an existing partition table by zeroing the first sector with the following command:
   dd if=/dev/zero of=physical_volume bs=512 count=1
2. Run the pvcreate command to initialize a block device to be used as a physical volume. Initialization is analogous to formatting a file system. For example, the following command initializes /dev/sdd, /dev/sde, and /dev/sdf as LVM physical volumes for use as part of LVM logical volumes:
   # pvcreate /dev/sdd /dev/sde /dev/sdf
3. Optional: To initialize partitions rather than whole disks, run the pvcreate command on the partition. The following example initializes the partition /dev/hdb1 as an LVM physical volume for use as part of an LVM logical volume:
   # pvcreate /dev/hdb1
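To confirm that initialization succeeded, you can list the new physical volumes. A brief sketch; the device names match the example above and the reported sizes will vary:

   # List the freshly initialized physical volumes and their sizes.
   pvs /dev/sdd /dev/sde /dev/sdf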

Replacing physical volumes that are missing

Sometimes a physical volume fails or must be replaced. You can label a new physical volume to replace it.

Procedure

If a physical volume fails or needs to be replaced, you can label a new physical volume to replace the lost physical volume in the existing volume group. To do so, follow the procedure to recover physical volume metadata that is described in "Recovering metadata of physical volumes."

Note:
- To display the UUIDs and sizes of the missing physical volumes, use the --partial and --verbose arguments with the vgdisplay command.
- You can replace another physical volume of the same size by running the pvcreate command with the --restorefile and --uuid arguments, initializing a new device with the same UUID as the missing physical volume. Then, run the vgcfgrestore command to restore the metadata of the volume group.

Removing lost physical volumes from a volume group

When physical volumes are lost, you can remove them from the volume group.

About this task

If you lose a physical volume, you can remove all the logical volumes that used that physical volume from the volume group.

Procedure
1. If you lose a physical volume, you can activate the remaining physical volumes in the volume group with the --partial argument of the vgchange command. You can remove all the logical volumes that used that physical volume from the volume group with the --removemissing argument of the vgreduce command.
2. Run the vgreduce command with the --test argument first, so that you can verify what would be deleted, as shown in the sketch after this procedure.

Note: The results of running the vgreduce command are reversible, in the sense that if you run the vgcfgrestore command immediately, you can restore the metadata of the volume group to its previous state.
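A short sketch of the two-step approach described above; the volume group name is illustrative:

   # Dry run first: --test reports what would be removed without changing anything.
   vgreduce --removemissing --test my_volume_group

   # Then perform the actual cleanup.
   vgreduce --removemissing my_volume_group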

Creating volume groups

You can create volume groups by using the LVM command-line interface.

About this task

To create a volume group from one or more physical volumes, run the vgcreate command. The vgcreate command creates a new volume group by name and adds at least one physical volume to it. The following command creates a volume group vg1 that contains physical volumes /dev/sdd1 and /dev/sde1:

vgcreate vg1 /dev/sdd1 /dev/sde1

When physical volumes are used to create a volume group, the disk space is divided into 4 MB extents by default. This extent size is the minimum amount by which the size of a logical volume can be increased or decreased. You can specify the extent size with the -s option of the vgcreate command. You can also specify limits on the number of physical or logical volumes that the volume group can have by using the -p and -l arguments of the vgcreate command.

You can also create volume groups in a cluster environment with the vgcreate command, just as you create them on a single node. When you run the following command in a cluster environment, it creates a volume group that is local to the node from which the command was run. The command creates a local volume group vg1 that contains physical volumes /dev/sdd1 and /dev/sde1:

vgcreate -c n vg1 /dev/sdd1 /dev/sde1

Creating logical volumes

You can create logical volumes by using the LVM command-line interface.

About this task

To create a logical volume, use the lvcreate command. If you do not specify a name for the logical volume, the default name lvol# is used, where # is the internal number of the logical volume. When you create a logical volume, the logical volume is carved from a volume group by using the free extents on the physical volumes that make up the volume group. Logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes.

The following command creates a logical volume 10 GB in size in the volume group vg1:

lvcreate -L 10G vg1

The following command creates a 1500 MB linear logical volume named testlv in the volume group testvg, creating the block device /dev/testvg/testlv:

lvcreate -L 1500 -n testlv testvg

Logical Volume Manager troubleshooting

This section describes how to troubleshoot Logical Volume Manager (LVM) issues. When a command does not execute as you expect, you can obtain diagnostics in the following ways (see also the sketch after this list):
- Use the -v, -vv, -vvv, or -vvvv argument of any command for increasingly verbose levels of output.
- If the issue is related to logical volume activation, set activation = 1 in the log section of the configuration file, then run the command with the -vvvv argument. After you examine this output, be sure to reset this parameter to 0 to avoid possible system locking issues when memory is low.
- Run the lvmdump command to produce an information dump for diagnostic purposes.
- Run the lvs -v, pvs -a, or dmsetup info -c command for more system information.
- Check the last backup of the metadata in the /etc/lvm/backup directory and archived versions in the /etc/lvm/archive directory.
- Run the lvmconfig command to verify the current configuration information.
- Check the .cache file in the /etc/lvm directory for information about devices that contain physical volumes.
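As a small example of the lvmdump step above, a minimal sketch; by default the command writes a tar archive in the current directory:

   # Gather an LVM diagnostic dump (configuration, device state, logs).
   lvmdump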

Displaying information on failed devices

When devices fail, you can display information about the problem.

Symptoms

Sometimes a volume fails, but the issue is not displayed in the command output.

Resolving the problem

Use the -P argument with the lvs or vgs command to display information that is related to the failed volume. For example, if one of the devices in the volume group vg1 fails, running the vgs command might display the following output:

vgs -o +devices
  Volume group "vg1" not found

When you specify the -P argument with the vgs command, even though you cannot use the volume group, you can still see more information about the failed device:

vgs -P -o +devices
  Partial mode. Incomplete volume groups will be activated read-only.
  VG   #PV #LV #SN Attr   VSize VFree Devices
  vg1    9   2   0 rz-pn- 3.11T 3.07T unknown device(0)
  vg1    9   2   0 rz-pn- 3.11T 3.07T unknown device(5130),/dev/sda1(0)

In this example, the failed device caused both a linear and a striped logical volume in the volume group to fail. When you use the -P argument, the logical volumes that have failed are also displayed:

lvs -P -a -o +devices
  Partial mode. Incomplete volume groups will be activated read-only.
  LV     VG  Attr   LSize  Origin Snap% Move Log Copy% Devices
  linear vg1 -wi-a- 20.00G                              unknown device(0)
  stripe vg1 -wi-a- 20.00G                              unknown device(5130),/dev/sda1(0)

Recovering metadata of physical volumes

Sometimes the volume group metadata area of a physical volume is accidentally overwritten or deleted. An error message is displayed, indicating that the metadata area is incorrect or that the system was unable to find a physical volume with a particular UUID.

Symptoms

The following example shows the kind of output that might be displayed if the metadata area is missing or corrupted:

# lvs -a -o +devices
  Couldn't find device with uuid FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk.
  Couldn't find all physical volumes for volume group VG.
  Couldn't find device with uuid FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk.
  Couldn't find all physical volumes for volume group VG.

Resolving the problem

You can recover the data from the physical volume by writing a new metadata area on the physical volume and specifying the same UUID as the lost metadata.

Note: Do not attempt this procedure with a working Logical Volume Manager (LVM) logical volume. Data is lost if you specify the incorrect UUID.

1. Find the UUID of the physical volume that was overwritten in the /etc/lvm/archive directory. Check the VolumeGroupName_xxxx.vg file for the last known valid archived LVM metadata for that volume group. Alternatively, you can deactivate the volume group and set the partial (-P) argument to obtain the UUID of the physical volume that is missing or corrupted:
   # vgchange -an --partial
     Partial mode. Incomplete volume groups will be activated read-only.
     Couldn't find device with uuid FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk.
     Couldn't find device with uuid FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk.
     ...
2. Use the --uuid and --restorefile arguments with the pvcreate command to restore the physical volume. The pvcreate command overwrites only the LVM metadata areas and does not affect the existing data areas:
   # pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1
     Physical volume "/dev/sdh1" successfully created
   In this command, /dev/sdh1 is the physical volume with the UUID FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk, and VG_00050.vg contains the most recent good archived metadata for the volume group.
3. Run the vgcfgrestore command to restore the metadata of the volume group:
   # vgcfgrestore VG
     Restored volume group VG
4. Display the logical volumes:
   # lvs -a -o +devices
     LV     VG Attr   LSize   Origin Snap% Move Log Copy% Devices
     stripe VG -wi--- 300.00G                              /dev/sdh1(0),/dev/sda1(0)
     stripe VG -wi--- 300.00G                              /dev/sdh1(34728),/dev/sdb1(0)
5. Activate the volumes and display the active volumes:
   # lvchange -ay /dev/VG/stripe
   # lvs -a -o +devices
     LV     VG Attr   LSize   Origin Snap% Move Log Copy% Devices
     stripe VG -wi-a- 300.00G                              /dev/sdh1(0),/dev/sda1(0)
     stripe VG -wi-a- 300.00G                              /dev/sdh1(34728),/dev/sdb1(0)

If the on-disk LVM metadata takes up at least as much space as the metadata that overrode it, this procedure can recover the physical volume. If the overriding metadata extended past the metadata area, the data on the volume might be affected. You can run the fsck command to attempt to recover that data.

Handling errors that indicate insufficient free extents

When you create a logical volume, you might get an error that indicates insufficient free extents.

Symptoms

You might get an error that indicates insufficient free extents when you create a logical volume, even though enough extents appear to exist when you run the vgdisplay or vgs command.

Resolving the problem

This error can occur because the vgdisplay and vgs commands round figures to two decimal places to provide human-readable output.
1. To determine the size of the logical volume, specify the exact size by using the free physical extent count instead of a multiple of bytes. Run one of the following commands:
   - Run the vgdisplay command and check the output for the free physical extents:
     vgdisplay
     --- Volume group ---
     ...
     Free PE / Size 8790 / 34.30 GB
   - Run the vgs command with the vg_free_count and vg_extent_count arguments to display the free extents and the total number of extents:
     vgs -o +vg_free_count,vg_extent_count
     VG      #PV #LV #SN Attr   VSize  VFree  Free #Ext
     testvg1   2   0   0 wz--n- 34.30G 34.30G 8790 8790
2. Run the following command with the -l argument to specify the size in extents instead of bytes:
   lvcreate -l 8790 -n testlv1 testvg1
   This command uses all the free extents in the volume group:
   vgs -o +vg_free_count,vg_extent_count
   VG      #PV #LV #SN Attr   VSize  VFree Free #Ext
   testvg1   2   1   0 wz--n- 34.30G     0    0 8790

Handling warnings that indicate duplicate PV for multipathed devices

When you list a volume group or a logical volume with the Logical Volume Manager (LVM) on multipathed storage, you might receive duplicate PV warnings.

Symptoms

When you use LVM with multipathed storage and run commands such as vgs or lvchange to list a volume group or logical volume, messages similar to the following might be displayed:

Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf

Causes

A single logical unit might have multiple device nodes that point to the same underlying data. When you run LVM commands, the same metadata is found multiple times and reported as a duplicate. These messages are only warnings and do not mean that the LVM operation has failed; only one of the devices is used as a physical volume and the others are ignored.

Resolving the problem

There are two scenarios in which you might face this problem. The subsequent topics describe how you can resolve the problem for these scenarios:
- The two devices that are displayed in the output are both single paths to the same device.
- The two devices that are displayed in the output are both multipath maps.

Handling duplicate PV warnings for single paths

When you list a volume group or a logical volume with the Logical Volume Manager (LVM) on multipathed storage, you might receive a duplicate PV warning where the duplicate devices are both single paths to the same device.

Symptoms

The following example shows a duplicate PV warning where the duplicate devices are both single paths to the same device. Both /dev/sdd and /dev/sdf are displayed under the same multipath map when you run the multipath -ll command.
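The transcription of the manual ends here. As a hedged sketch of the resolution commonly used for this single-path scenario (not taken from this excerpt), you can restrict which devices LVM scans by setting a device filter in /etc/lvm/lvm.conf so that only the multipath map, and not its component paths, is considered a physical volume. The exact patterns depend on your device naming; adjust before use:

   # /etc/lvm/lvm.conf -- accept multipath maps, reject everything else.
   # "a|...|" accepts a pattern, "r|...|" rejects; patterns are illustrative.
   filter = [ "a|/dev/mapper/.*|", "r|.*|" ]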