Chapter 8. Managing Storage

Contents

8.1. Managing Storage with Virtual Machine Manager

When managing a VM Guest on the VM Host Server itself, it is possible to access the complete file system of the VM Host Server in order to attach disks or images to the VM Guest. However, this is not possible when managing VM Guests from a remote host. For this reason, libvirt supports so-called Storage Pools, which can be accessed from remote machines. libvirt distinguishes two types of storage: volumes and pools.

Storage Volume

A storage volume is a storage device that can be assigned to a guest—a virtual disk or a CD/DVD/floppy image. Physically (on the VM Host Server) it can be a block device (a partition, a logical volume, etc.) or a file.

Storage Pool

A storage pool is a storage resource on the VM Host Server that can be used for storing volumes, similar to network storage for a desktop machine. Physically, it can be one of the following types:

File System Directory (dir)

A directory for hosting image files. The files can be fully allocated raw files, sparsely allocated raw files, or ISO images.

Physical Disk Device (disk)

Use a complete physical disk as storage. A partition is created for each volume that is added to the pool. It is recommended to use a device name from /dev/disk/by-* rather than the simple /dev/sdX, since the latter may change.

Pre-Formatted Block Device (fs)

Specify a partition to be used in the same way as a file system directory pool (a directory for hosting image files). The only difference from a file system directory pool is that libvirt takes care of mounting the device.

iSCSI Target (iscsi)

Set up a pool on an iSCSI target. You need to have logged into the target once before in order to use it with libvirt (use the YaST iSCSI Initiator to detect and log into a volume). It is recommended to use a device name from /dev/disk/by-id rather than the simple /dev/sdX, since the latter may change. Volume creation on iSCSI pools is not supported; instead, each existing Logical Unit Number (LUN) represents a volume. Each volume/LUN also needs a valid (empty) partition table or disk label before you can use it. If it is missing, use, for example, fdisk to add it:

~ # fdisk -cu /dev/disk/by-path/ip-192.168.2.100:3260-iscsi-iqn.2010-10.com.example:[...]-lun-2
Device contains neither a valid DOS partition table, nor Sun, SGI
or OSF disklabel
Building a new DOS disklabel with disk identifier 0xc15cdc4e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
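
If you prefer the command line over the YaST iSCSI Initiator, the initial discovery and login can also be done with iscsiadm. This is a minimal sketch; the portal address and target IQN are placeholders:

iscsiadm --mode discovery --type sendtargets --portal 192.168.2.100
iscsiadm --mode node --targetname iqn.2010-10.com.example:demo-target --portal 192.168.2.100:3260 --login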
LVM Volume Group (logical)

Use an LVM volume group as a pool. You may either use a pre-defined volume group, or create a group by specifying the devices to use. Storage volumes are created as logical volumes in the volume group.

[Warning]Deleting the LVM Based Pool

When the LVM based pool is deleted in the Storage Manager, the volume group is deleted as well. This results in a non-recoverable loss of all data stored on the pool!

Network Exported Directory (netfs)

Specify a network directory to be used in the same way as a file system directory pool (a directory for hosting image files). The only difference from a file system directory pool is that libvirt takes care of mounting the directory. Supported protocols are NFS and glusterfs.

SCSI Host Adapter (scsi)

Use a SCSI host adapter in almost the same way as an iSCSI target. It is recommended to use a device name from /dev/disk/by-id rather than the simple /dev/sdX, since the latter may change. Volume creation on SCSI pools is not supported; instead, each existing LUN (Logical Unit Number) represents a volume.

[Warning]Security Considerations

To avoid data loss or data corruption, do not additionally use resources such as LVM volume groups or iSCSI targets on the VM Host Server itself while they are being used to build storage pools. There is no need to connect to these resources from the VM Host Server or to mount them on the VM Host Server—libvirt takes care of this.

Do not mount partitions on the VM Host Server by label. Under certain circumstances it is possible that a partition is labeled from within a VM Guest with a name already existing on the VM Host Server.

When managing VM Guests remotely, a volume can only be accessed if it is located in a storage pool. Conversely, creating new volumes is only possible within an existing storage pool. Although remote installation of a guest is currently not supported on openSUSE, there are other use cases for storage pools, such as adding an additional virtual disk. If VM Guests should be able to access CD-ROM or DVD-ROM images, these images also need to be in a storage pool.
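
For example, a simple directory-based pool can also be set up from the command line with virsh. This is a minimal sketch; the pool name pool0 and the target directory are arbitrary examples:

virsh pool-define-as pool0 dir --target /var/lib/libvirt/images/pool0
virsh pool-build pool0       # create the target directory if it does not exist
virsh pool-start pool0
virsh pool-autostart pool0   # activate the pool automatically on host boot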

8.1. Managing Storage with Virtual Machine Manager

The Virtual Machine Manager provides a graphical interface—the Storage Manager—to manage storage volumes and pools. To access it, either right-click a connection and choose Details, or highlight a connection and choose Edit+Connection Details. Select the Storage tab.

8.1.1. Adding a Storage Pool

To add a storage pool, proceed as follows:

  1. Click the plus symbol in the bottom left corner to open the Add a New Storage Pool window.

  2. Provide a Name for the pool (consisting of alphanumeric characters plus _-.) and select a Type. Proceed with Forward.

  3. Specify the needed details in the following window. The data that needs to be entered depends on the type of pool you are creating.

    Type dir:
    • Target Path: Specify an existing directory.

    Type disk:
    • Target Path: The directory which hosts the devices. It is recommended to use a directory from /dev/disk/by-* rather than from /dev, since device names in the latter directory may change.

    • Format: Format of the device's partition table. Using auto should work in most cases. If not, get the needed format by running parted -l.

    • Source Path: The device, for example sda.

    • Build Pool: Activating this option formats the device. Use with care—all data on the device will be lost!

    Type fs:
    • Target Path: Mount point on the VM Host Server file system.

    • Format: File system format of the device. The default value auto should work.

    • Source Path: Path to the device file. It is recommended to use a device name from /dev/disk/by-* rather than the simple /dev/sdX, since the latter may change.

    Type iscsi:

    Get the necessary data by running the following command on the VM Host Server:

    iscsiadm --mode node

    It will return a list of iSCSI volumes in the following format. You need the IP address with port, and the target name (IQN):

    IP_ADDRESS:PORT,TPGT TARGET_NAME_(IQN)
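
    For example, an entry could look like this (hypothetical values):

    192.168.2.100:3260,1 iqn.2010-10.com.example:storage.demo
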
    • Target Path: The directory containing the device file. Do not change the default /dev/disk/by-path.

    • Host Name: Host name or IP address of the iSCSI server.

    • Source Path: The iSCSI target name (IQN).

    Type logical:
    • Target Path: If you use an existing volume group, specify its device path. If you are building a new LVM volume group, specify a device name in the /dev directory that does not already exist.

    • Source Path: Leave empty when using an existing volume group. When creating a new one, specify its devices here.

    • Build Pool: Only activate when creating a new volume group.

    Type netfs:
    • Target Path: Mount point on the VM Host Server file system.

    • Format: Network file system protocol (NFS or glusterfs).

    • Host Name: IP address or hostname of the server exporting the network file system.

    • Source Path: Directory on the server that is being exported.

    Type scsi:
    • Target Path: The directory containing the device file. Do not change the default /dev/disk/by-path.

    • Source Path: Name of the SCSI adapter.

    [Note]File Browsing

    Using the file browser by clicking Browse is not possible when managing a remote host.

  4. Click Finish to add the storage pool.
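
The same pools can also be defined without the graphical interface, using virsh. As a sketch, an NFS-backed pool could be set up as follows (host name and paths are hypothetical):

virsh pool-define-as nfspool netfs --source-host nfs.example.com --source-path /exports/images --target /var/lib/libvirt/images/nfspool
virsh pool-start nfspool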

8.1.2. Managing Storage Pools

Virtual Machine Manager's Storage Manager lets you create or delete volumes in a pool. You may also temporarily deactivate or permanently delete existing storage pools. Changing the basic configuration of a pool is not supported.

8.1.2.1. Starting, Stopping and Deleting Pools

The purpose of storage pools is to provide block devices located on the VM Host Server that can be added to a VM Guest when managing it remotely. To make a pool temporarily inaccessible from remote, you may Stop it by clicking the stop symbol in the bottom left corner of the Storage Manager. Stopped pools are marked with State: Inactive and are grayed out in the list pane. By default, a newly created pool will automatically be started On Boot of the VM Host Server.

To Start an inactive pool and make it available from remote again, click the play symbol in the bottom left corner of the Storage Manager.
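
With virsh, stopping a pool is called destroying it; this only deactivates the pool and does not touch its data. A sketch, assuming a pool named pool0:

virsh pool-destroy pool0     # deactivate the pool; its data remains untouched
virsh pool-start pool0       # make the pool available again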

[Note]A Pool's State Does not Affect Attached Volumes

Volumes from a pool attached to VM Guests are always available, regardless of the pool's state (Active or Inactive). The state of the pool solely affects the ability to attach volumes to a VM Guest via remote management.

To permanently make a pool inaccessible, you can Delete it by clicking the shredder symbol in the bottom left corner of the Storage Manager. You may only delete inactive pools. Deleting a pool does not physically erase its contents on the VM Host Server—it only deletes the pool configuration. However, you need to be extra careful when deleting pools, especially when deleting LVM volume group-based pools:

[Warning]Deleting Storage Pools

Deleting storage pools based on local file system directories, local partitions or disks has no effect on the availability of volumes from these pools currently attached to VM Guests.

Volumes located in pools of type iSCSI, SCSI, LVM group or Network Exported Directory will become inaccessible from the VM Guest if the pool is deleted. Although the volumes themselves will not be deleted, the VM Host Server will no longer have access to the resources.

Volumes on iSCSI/SCSI targets or Network Exported Directories will become accessible again by creating an adequate new pool or by mounting/accessing these resources directly from the host system.

When deleting an LVM group-based storage pool, the LVM group definition will be erased and the LVM group will no longer exist on the host system. The configuration is not recoverable and all volumes from this pool are lost.
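
When working with virsh instead of the Storage Manager, be aware of the distinction between the following commands (pool0 is a placeholder):

virsh pool-destroy pool0     # deactivate the pool
virsh pool-undefine pool0    # remove only the persistent pool configuration
virsh pool-delete pool0      # caution: erases the resources backing the pool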

8.1.2.2. Adding Volumes to a Storage Pool

Virtual Machine Manager lets you create volumes in all storage pools, except in pools of types iSCSI or SCSI. A volume in these pools is equivalent to a LUN and cannot be changed from within libvirt.

  1. A new volume can either be created using the Storage Manager or while adding a new storage device to a VM Guest. In both cases, select a Storage Pool and then click New Volume.

  2. Specify a Name for the image and choose an image format (note that Novell currently only supports raw images). Choosing an image format is not possible on LVM group-based pools.

    Specify a Max Capacity and the amount of space that should initially be allocated. If the two values differ, a sparse image file that grows on demand will be created.

  3. Start the volume creation by clicking Finish.
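
Volumes can also be created with virsh. A minimal sketch that creates a sparse 10 GB raw volume in a pool named pool0 (all names are placeholders):

virsh vol-create-as pool0 disk0.raw 10G --allocation 1G --format raw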

8.1.2.3. Deleting Volumes From a Storage Pool

Deleting a volume can only be done from the Storage Manager, by selecting a volume and clicking Delete Volume. Confirm with Yes. Use this function with extreme care!

[Warning]No Checks Upon Volume Deletion

A volume will be deleted in any case, regardless of whether it is currently used by an active or inactive VM Guest. There is no way to recover a deleted volume.
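
The virsh counterpart, which is equally unchecked (pool and volume names are placeholders):

virsh vol-delete disk0.raw --pool pool0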

At the moment libvirt offers no tools to list all volumes currently being used in VM Guest definitions. This makes it even more dangerous to delete volumes with the Storage Manager. The following procedure describes a way to create such a list by processing the VM Guest XML definitions with XSLT:

Procedure 8.1. Listing all Storage Volumes Currently Used on a VM Host Server

  1. Create an XSLT style sheet by saving the following content to a file, for example, ~/libvirt/guest_storage_list.xsl:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="text()"/>
      <xsl:strip-space elements="*"/>
      <xsl:template match="disk">
        <xsl:text>  </xsl:text>
        <xsl:value-of select="(source/@file|source/@dev|source/@dir)[1]"/>
        <xsl:text>&#10;</xsl:text>
      </xsl:template>
    </xsl:stylesheet>
    
  2. Run the following commands in a shell. It is assumed that the guest's XML definitions are all stored in the default location (/etc/libvirt/qemu). xsltproc is provided by the package libxslt.

    SSHEET="$HOME/libvirt/guest_storage_list.xsl"
    cd /etc/libvirt/qemu
    for FILE in *.xml; do
      basename "$FILE" .xml
      xsltproc "$SSHEET" "$FILE"
    done
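
If the guest XML definitions are not stored in /etc/libvirt/qemu, or when managing a remote host, a variant that retrieves the XML via virsh should work as well. This is a sketch, assuming a working virsh connection:

SSHEET="$HOME/libvirt/guest_storage_list.xsl"
for GUEST in $(virsh list --all --name); do
  echo "$GUEST"
  virsh dumpxml "$GUEST" | xsltproc "$SSHEET" -
done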