NAME
lvm - Logical Volume Manager (LVM)

DESCRIPTION
The Logical Volume Manager (LVM) is a subsystem for managing disk space. The HP LVM subsystem offers value-added features, such as mirroring (with the optional HP MirrorDisk/UX software), high availability (with the optional HP Serviceguard software), and striping, that enhance availability and performance. LVM also provides the snapshot feature to create point-in-time images (snapshots) of LVM logical volumes.

Unlike earlier arrangements where disks were divided into fixed-size sections, LVM allows the user to treat the disks, also known as physical volumes, as a pool (or volume) of data storage consisting of equal-sized extents. The size of an extent can vary from 1 MB to 256 MB.

An LVM system consists of arbitrary groupings of physical volumes, organized into volume groups. A volume group can consist of one or more physical volumes, and there can be more than one volume group in the system. Once created, the volume group, not the disk, is the basic unit of data storage. Thus, whereas one formerly moved disks from one system to another, with LVM one moves a volume group from one system to another. For this reason it is often convenient to have multiple volume groups on a system.

Volume groups can be subdivided into virtual disks, called logical volumes. A logical volume can span a number of physical volumes or represent only a portion of one physical volume. The pool of disk space represented by a volume group can be apportioned into logical volumes of various sizes; the size of a logical volume is determined by its number of extents. Once created, logical volumes can be treated just like disk partitions: they can be assigned to file systems, used as swap or dump devices, or used for raw access.
Commands
LVM information can be created, displayed, and manipulated with the following commands:

lvchange      Change logical volume characteristics
lvcreate      Create logical volume in volume group
lvdisplay     Display information about logical volumes
lvextend      Increase space or increase mirrors for logical volume
lvlnboot      Prepare logical volume to be root, primary swap, or dump volume
lvmadm        Display limits associated with a volume group version
lvmove        Move a logical volume within a volume group
lvreduce      Decrease number of physical extents allocated to logical volume
lvremove      Remove one or more logical volumes from volume group
lvrmboot      Remove logical volume link to root, primary swap, or dump volume
pvchange      Change characteristics of physical volume in volume group
pvcreate      Create physical volume for use in volume group
pvdisplay     Display information about physical volumes within volume group
pvmove        Move allocated physical extents from one physical volume to other physical volumes
vgcfgbackup   Create or update volume group configuration backup file
vgcfgrestore  Display or restore volume group configuration from backup file
vgchange      Set volume group availability
vgcreate      Create volume group
vgdisplay     Display information about volume groups
vgexport      Export a volume group and its associated logical volumes
vgextend      Extend a volume group by adding physical volumes
vgimport      Import a volume group onto the system
vgmodify      Modify volume group attributes
vgmove        Move data from an old set of disks to a new set of disks
vgreduce      Remove physical volumes from a volume group
vgremove      Remove volume group definition from the system
vgscan        Scan physical volumes for volume groups
vgversion     Migrate a volume group from one volume group version to another

The following commands are also available if the HP MirrorDisk/UX software is installed:

lvmerge       Merge two logical volumes into one logical volume
lvsplit       Split mirrored logical volume into two logical volumes
HP-UX 11i Version 3: October 2010          Hewlett-Packard Company
lvsync        Synchronize stale mirrors in logical volumes
vgsync        Synchronize stale logical volume mirrors in volume groups

Device Special Files
Starting with HP-UX 11i Version 3, the Mass Storage Stack supports two naming conventions for the device special files used to identify devices (see intro(7)). Devices can be represented using:

persistent device special files (for example, /dev/disk/disk3), or

legacy device special files (for example, /dev/dsk/c0t6d6).

While LVM supports the use of both conventions within the same volume group, the examples shown in the LVM man pages all use the legacy device special file convention.

Alternate Links (PVLinks)
In this release of HP-UX, LVM continues to support alternate links to a device to allow continued access to the device if the primary link fails. This multiple-link or multipath solution increases data availability, but it still does not allow simultaneous use of multiple paths. A feature introduced in the Mass Storage Subsystem on HP-UX 11i Version 3 supports multiple paths to a device and allows simultaneous access through those paths; the Mass Storage Subsystem balances the I/O load across the valid paths. This native multipathing is the default; legacy multipathing applies only if it has been enabled with the scsimgr command and the active path is a legacy device special file. See scsimgr(1M) for details.

Even though the Mass Storage Subsystem supports up to 32 paths per physical volume on this version of HP-UX, LVM does not support more than eight paths to any physical volume. As a result, commands like vgcreate and vgextend will not succeed in adding more than eight paths per physical volume. Additionally, vgimport and vgscan cannot write more than eight paths per physical volume to the /etc/lvmtab or /etc/lvmtab_p files. To use a specific path other than these eight, use vgreduce to remove one of the alternate paths from the volume group, then add the desired path with vgextend.
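The path replacement described above can be sketched as follows; the device special file names are hypothetical:

```shell
# Remove one of the eight recorded alternate links from the volume
# group (path names here are illustrative only).
vgreduce /dev/vg01 /dev/dsk/c5t6d0

# Add the specific path that should be tracked instead.
vgextend /dev/vg01 /dev/dsk/c9t6d0

# Verify which physical volume paths are now recorded.
vgdisplay -v /dev/vg01
```

These are HP-UX administration commands and must be run by a privileged user on the system owning the volume group.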
It is no longer required or recommended to configure LVM with alternate links. However, it is possible to maintain the traditional LVM behavior. To do so, both of the following criteria must be met:

Only the legacy device special file naming convention is used in the volume group configuration.

The scsimgr command is used to enable the legacy multipath behavior for each physical volume in the volume group.

LVM's Volume Group Versions 1.0, 2.0, 2.1, and 2.2
LVM now has four different volume group versions: 1.0, 2.0, 2.1, and 2.2. The original version of the LVM volume group is 1.0. Version 2.0, 2.1, and 2.2 volume groups allow LVM to increase many of the limits constraining the size of volume groups, logical volumes, and so on. Version 2.2 volume groups have the same limits as version 2.1 volume groups. To see a comparison of the limits for volume group versions 1.0, 2.0, 2.1, and 2.2, use the lvmadm command (see lvmadm(1M)). Version 2.2 volume groups support boot volume groups and snapshot logical volumes (see lvlnboot(1M) and lvcreate(1M) for more details).

The procedures and command syntax for managing version 1.0 volume groups are unchanged. To take advantage of the improvements in volume group versions 2.0 or higher, a volume group is declared to be version 2.0, 2.1, or 2.2 at creation time using the new -V option to the vgcreate command. The vgcreate command will create the volume group directory and group file if they do not already exist; this is independent of the volume group version.

There are several differences in the procedure for creating a volume group of version 2.0 or higher:

The volume group directory and group file will have a different major/minor number combination. See vgcreate(1M) for details.

It is no longer necessary to set maximums for physical volumes, logical volumes, or extents per physical volume. Instead, the vgcreate command expects a maximum size for the volume group.
The size of a volume group is the sum of the user data space on all physical volumes assigned to the volume group.
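The limit comparison mentioned above comes from the lvmadm command; the -t option used in this sketch is an assumption to be verified against lvmadm(1M):

```shell
# Tabulate the implementation limits of every supported volume group
# version (option letter assumed; check lvmadm(1M) on your system).
lvmadm -t

# Restrict the display to a single volume group version.
lvmadm -t -V 2.2
```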
Extent size is now a required parameter. For version 1.0 volume groups, the default extent size is 4 MB; for volume groups version 2.0 or higher, the extent size must be specified.

Volume group versions 2.0 and 2.1 do not support root, boot, swap, or dump volumes. Volume group versions 2.2 and higher do support root, boot, swap, and dump.

Volume groups version 2.0 or higher do not support spare physical volumes.

The maximum number of version 1.0 volume groups per system is 256. The maximum number of version 2.0 volume groups per system is 512. The maximum combined number of version 2.0, 2.1, and 2.2 volume groups is 2048.

The vgversion(1M) command allows migration between any two supported volume group versions, with the exception of moving back to version 1.0.

Extent Sizing for Volume Group Version 2.0 and Higher
In version 1.0 volume groups, the LVM metadata is required to fit into a single physical extent, so if large values for maximum physical volumes, logical volumes, and extents per physical volume were chosen, a large extent size was required. In volume groups version 2.0 and higher, the metadata is not restricted to a single extent. There is an implementation limit on the number of extents in a volume group (see lvmadm(1M)), so the larger the extent size, the larger the maximum volume group size that can be supported. The amount of space taken up on each physical volume by LVM metadata depends on the physical extent size and the maximum volume group size specified when the volume group is created. LVM metadata for volume groups version 2.0 and higher may consume more space than on version 1.0 volume groups. The vgcreate command has a new option (-E) which shows the relationship between extent size and maximum volume group size.

A smaller extent size allows finer granularity in assigning space to logical volumes. It also means that smaller blocks of data are marked stale when I/Os to a mirror copy fail.
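The -E option described above can be used to preview the extent size versus maximum volume group size trade-off before creating anything; combining it with the -s and -S options as shown here is an assumption, so check vgcreate(1M) for the exact syntax and report format:

```shell
# Ask vgcreate to report how a 32 MB extent size relates to the
# achievable maximum volume group size, without creating the group
# (device name and option combination are illustrative).
vgcreate -V 2.1 -E -s 32 -S 32t /dev/vg01 /dev/dsk/c0t6d0
```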
For small logical and physical volumes, a smaller extent size may result in less wasted space. Since there are limits on the number of extents in a logical or physical volume, a small extent size will limit the total size of a logical or physical volume; conversely, a larger extent size allows the creation of larger logical volumes and the use of larger physical volumes.

Auto Boot Disk Migration
This feature allows users to configure how LVM handles situations where the physical location of the boot disk changes between reboots. This can occur during hardware configuration changes or if boot disk images are cloned. In those situations, Auto Boot Disk Migration automatically updates stale configuration entries for the root volume group in the LVM configuration files (/etc/lvmtab or /etc/lvmtab_p) and in the Boot Data Reserved Area of each bootable physical volume in the root volume group. The configuration files are synchronized with the information from the kernel at boot time.

The Auto Boot Disk Migration feature (defined by the AUTO_BOOT_MIGRATE flag in the /etc/lvmrc file) is turned on by default. When the feature is turned on, any mismatch between the /etc/lvmtab or /etc/lvmtab_p entries and the kernel's on-disk metadata structures for the root volume group is automatically fixed during the boot process. The feature can be turned off by editing the /etc/lvmrc file and setting AUTO_BOOT_MIGRATE to 0; in that case, users need to check the syslog file after boot and follow any instructions logged there.

Snapshots for Volume Group Version 2.2 and Higher
A snapshot represents a point-in-time image of a logical volume. Multiple snapshots can be created from a single LVM logical volume. LVM snapshots let you do the following:

Back up data on the logical volume without splitting the logical volume. This reduces the space requirement.
Create snapshots faster than manually copying over multiple point-in-time copies of the logical volume.

Create multiple snapshots of a logical volume. This provides images of the logical volume for multiple points in time without allocating space equivalent to the entire size of the logical volume per copy.
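As a hedged sketch of the backup use case above, assuming a snapshot logical volume named /dev/vg01/snap1 has already been created (see lvcreate(1M) for the creation syntax) and that the hypothetical mount point /snapmnt is free:

```shell
# Mount the read-only snapshot while the original logical volume
# stays mounted read-write and in service.
mkdir -p /snapmnt
mount -r /dev/vg01/snap1 /snapmnt

# Archive the point-in-time image; tar is used here only as an
# illustrative backup tool.
tar -cf /tmp/usrvol_backup.tar /snapmnt

# Release the snapshot mount when the backup completes.
umount /snapmnt
```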
The logical volume and all of its snapshots together form a snapshot tree. The logical volume from which the snapshots were taken is referred to as the "original" logical volume. Only a single snapshot can be created at a time, which means that one cannot create point-in-time copies of more than one logical volume at a time. Also, snapshots can only be created from normal logical volumes, not from snapshot logical volumes.

Within the snapshot tree, the original logical volume and its snapshot logical volumes maintain a successor-predecessor relationship with each other. When a snapshot is created from a logical volume, the original logical volume becomes the successor of the snapshot logical volume on the tree, and the snapshot becomes the predecessor of the original volume. Consider a logical volume lv1 with a snapshot s1. When the next snapshot s2 is created, s2 becomes the new successor of the s1 snapshot and the new predecessor of the original logical volume lv1. When one more snapshot s3 is created, the snapshot tree is represented as follows, with the arrow pointing to the successor:

lv1 <- s3 <- s2 <- s1

When created, a snapshot shares all of its data with the original logical volume. The snapshot gets a copy of its own data only when a write (copy-before-write) occurs to itself or to its successor. This process is referred to as data unsharing. To support snapshots in volume groups version 2.2 and higher, LVM introduced a new configuration parameter, the unshare unit (set with the -U option of vgcreate). The unshare unit is the smallest unit at which data can be unshared between a logical volume and its snapshots, and it can be configured only when the volume group is created. See vgcreate(1M) for details.

Two types of snapshots are supported.

Fully-Allocated Snapshot
When a fully allocated snapshot is created, the number of extents required for the snapshot is allocated immediately, just as for a normal logical volume.
However, the data contained in the original logical volume is not copied over to these extents; the copying of data occurs through the data unsharing process.

Space-Efficient Snapshot
When a space-efficient snapshot is created, the user is expected to specify the number of extents that LVM needs to set aside for future unsharing. These extents are referred to as pre-allocated extents. Refer to lvcreate(1M) and lvextend(1M) for details. After the snapshot creation, the user can further increase the number of pre-allocated extents.

When the number of extents in the pre-allocated extent pool falls below a certain threshold, a message is logged in the system's syslog and an event is published to the listening subsystems. If automatic increase of pre-allocated extents is enabled, the number of pre-allocated extents is automatically incremented by the threshold value. The threshold value can be set or changed using the lvcreate and lvchange commands; see lvcreate(1M) and lvchange(1M) for more information. Note that the lvmpud daemon must be running for this to succeed; refer to lvmpud(1M) for more information on the daemon.

For the default allocation policy, the threshold is reached when the number of free extents in the pre-allocated pool is less than or equal to the configured threshold percentage of the total number of extents in the pre-allocated pool.
For the striped allocation policy, the threshold is reached when the number of full extent stripes that can be formed using free pre-allocated extents is less than or equal to the value calculated as follows:

threshold value = (threshold percentage of total pre-allocated extents, rounded to the stripe width) / (stripe width)

For the distributed allocation policy, the threshold is reached when the number of free extents in the pre-allocated pool is less than or equal to the configured threshold percentage of the total number of extents in the pre-allocated pool, or when more than half of the free extents in the pre-allocated pool are from the same physical volume.

If a write requires an unshare operation to be performed on the snapshot, but there is no available extent in the pre-allocated extent pool, then the snapshot gets marked as over-committed.
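The striped-policy formula above can be worked through with hypothetical numbers (120 pre-allocated extents, stripe width 4, threshold percentage 10); rounding up to a multiple of the stripe width is an assumption here:

```shell
total=120      # total pre-allocated extents (hypothetical)
width=4        # stripe width (hypothetical)
pct=10         # configured threshold percentage (hypothetical)

# threshold percentage of the total: 10% of 120 = 12 extents
raw=$((total * pct / 100))

# round up to a multiple of the stripe width (12 already is one)
rounded=$(( (raw + width - 1) / width * width ))

# divide by the stripe width to express the threshold in full stripes
threshold=$((rounded / width))
echo "$threshold"
```

With these numbers the threshold is reached once fewer than 3 full stripes of free pre-allocated extents remain.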
If an unshare operation fails on a snapshot, the snapshot is marked as inoperative. All reads and writes on an inoperative logical volume will fail.

Snapshots are read-only logical volumes by default, but the user can choose to create writable snapshots. See lvcreate(1M) and lvchange(1M) for more information.

If a logical volume has snapshots associated with it, there can be an increase in the latencies of reads and writes on the original logical volume or its snapshots. The increased latency is not incurred after the first write to an unshare unit.

The following are differences between snapshot logical volumes and the logical volumes that result from the lvsplit command:

The lvsplit command can be used to atomically split multiple logical volumes, whereas snapshots can only be created one at a time.

When a logical volume is split, the number of mirror copies associated with the original logical volume is reduced by one. In the case of snapshots, the mirror copies associated with the original logical volume and its snapshot are independent of each other and can be specified at creation. See lvcreate(1M).

The two logical volumes that result from a split operation can be considered independent logical volumes. Snapshots are always associated with the original logical volume; the original logical volume has to be available for the snapshot to be available.

A split logical volume is fully allocated and does not share any on-disk data with the original logical volume. Snapshots, at the point of creation, share all of their data with the original logical volume.

Snapshots are read-only by default, whereas a split logical volume is read-write.

A logical volume can have up to 255 snapshots irrespective of the number of mirrors associated with it, whereas such a logical volume cannot be split unless it is mirrored.
In the case of split logical volumes, there is a certain degree of control over which physical extents are involved, while in the case of space-efficient snapshots one cannot choose the physical extents that will be used on demand for data unsharing.

There is very little performance degradation for writes to split logical volumes, while there is a performance degradation associated with logical volumes on a snapshot tree.

NOTE: As the size of the volume group and the size of the snapshot capacity in the volume group increase, the size of the configuration backup file for the volume group also increases. Therefore, for volume group version 2.0 and higher, the location for backing up the volume group configuration is configurable (from the default path of /etc/lvmconf). See vgcfgbackup(1M) for more details.

EXAMPLES
The basic steps for beginning to use LVM are as follows:

Identify the disks to be used for LVM.

Create an LVM data structure on each identified disk (see pvcreate(1M)).

Collect all the physical volumes to form a new volume group (see vgcreate(1M)).

Create logical volumes from the space in the volume group (see lvcreate(1M)).

Use each logical volume as if it were a disk section (create a file system, or use it for raw access).

To configure disk /dev/dsk/c0t0d0 as part of a new volume group version 1.0 named vg01:

First, initialize the disk for LVM with the pvcreate command.

pvcreate /dev/rdsk/c0t0d0

Then, create the pseudo device files used by the LVM subsystem. The volume group (vg_name) directory and group file will be created automatically by vgcreate. Optionally, these files can be created before running vgcreate, as follows:

mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000

The minor number of the group file should be unique among all the volume groups on the system. It has the format 0xNN0000, where NN ranges from 00 to ff.
Create the volume group vg01, containing the physical volume /dev/dsk/c0t0d0, with the vgcreate command.

vgcreate /dev/vg01 /dev/dsk/c0t0d0

You can view information about the newly created volume group with the vgdisplay command.

vgdisplay -v /dev/vg01

Create a logical volume of size 100 MB, named usrvol, on this volume group with the lvcreate command.

lvcreate -L 100 -n usrvol /dev/vg01

This creates two device files for the logical volume: /dev/vg01/usrvol, the block device file, and /dev/vg01/rusrvol, the character (raw) device file. You can view information about the newly created logical volume with the lvdisplay command.

lvdisplay /dev/vg01/usrvol

Any operation allowed on a disk partition is allowed on the logical volume. Thus, you can use usrvol to hold a file system.

newfs /dev/vg01/rusrvol
mount /dev/vg01/usrvol /usr

To use a volume group version 2.0 or higher in the above example, only a few changes are required. The volume group directory and group file are created automatically in all supported versions; only the vgcreate command changes. The following creates the volume group with an extent size of 32 megabytes and a maximum volume group size of 32 terabytes (see vgcreate(1M)):

vgcreate -V 2.0 -s 32 -S 32t /dev/vg01 /dev/dsk/c0t0d0

or

vgcreate -V 2.1 -s 32 -S 32t /dev/vg01 /dev/dsk/c0t0d0

or

vgcreate -V 2.2 -s 32 -S 32t /dev/vg01 /dev/dsk/c0t0d0

The following creates the volume group with an unshare unit of 512 KB:

vgcreate -V 2.2 -U 512 -s 32 -S 32t /dev/vg01 /dev/dsk/c0t0d0

SEE ALSO
lvchange(1M), lvcreate(1M), lvdisplay(1M), lvextend(1M), lvlnboot(1M), lvmadm(1M), lvmove(1M), lvmpud(1M), lvreduce(1M), lvremove(1M), lvrmboot(1M), pvchange(1M), pvcreate(1M), pvdisplay(1M), pvmove(1M), vgcfgbackup(1M), vgcfgrestore(1M), vgchange(1M), vgcreate(1M), vgdisplay(1M), vgexport(1M), vgextend(1M), vgimport(1M), vgmodify(1M), vgmove(1M), vgreduce(1M), vgremove(1M), vgscan(1M), vgversion(1M), intro(7).
HP-UX System Administration: Logical Volume Management.

If HP MirrorDisk/UX is installed: lvmerge(1M), lvsplit(1M), lvsync(1M), vgsync(1M).

If HP Serviceguard is installed: cmcheckconf(1M), cmquerycl(1M), Managing Serviceguard.