Bug 1181087 - libvirt should give a clear error when defining a fs pool with multiple source devices
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-01-12 11:09 UTC by Yang Yang
Modified: 2015-11-19 06:07 UTC
CC List: 7 users

Fixed In Version: libvirt-1.2.17-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 06:07:37 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2015:2202 (Priority: normal, Status: SHIPPED_LIVE) - libvirt bug fix and enhancement update - Last Updated: 2015-11-19 08:17:58 UTC

Description Yang Yang 2015-01-12 11:09:17 UTC
Description of problem:
Currently it is possible to define a fs pool with multiple source devices,
but starting the pool then fails with a "missing source device" error. Once
the source devices have been mounted on the target path, the pool is
activated when libvirtd starts up; however, the pool can then no longer be
destroyed.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-12.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. define a fs pool with 2 source devices
# virsh pool-define fs-pool.xml
# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
fs                   inactive     no

# virsh pool-dumpxml fs
<pool type='fs'>
  <name>fs</name>
  <uuid>f20d9af6-5093-4710-958d-7f872e66fece</uuid>
  <capacity unit='bytes'>1063256064</capacity>
  <allocation unit='bytes'>33722368</allocation>
  <available unit='bytes'>1029533696</available>
  <source>
    <device path='/dev/vg/lvol0'/>
    <device path='/dev/sdc'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/tmp/fs</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

# virsh pool-start fs
error: Failed to start pool fs
error: internal error: missing source device


2. mount the source devices to target
# mount /dev/vg/lvol0 /tmp/fs/
# mount /dev/sda7 /tmp/fs

# mount
/dev/mapper/vg-lvol0 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/sda7 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

3. restart libvirtd
# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

4. check the fs pool
# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
fs                   active     no  

5. destroy fs pool
# virsh pool-destroy fs
error: Failed to destroy pool fs
error: internal error: missing source device

Actual results:
the fs pool can NOT be destroyed

Expected results:
Libvirt should prevent defining a fs pool with multiple source devices, or
at least report a clear error.
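
For comparison, a minimal sketch of the valid form, derived from the XML above by trimming the source to a single device (whether pool-start then succeeds still depends on /dev/vg/lvol0 actually containing an xfs filesystem):

<pool type='fs'>
  <name>fs</name>
  <source>
    <device path='/dev/vg/lvol0'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/tmp/fs</path>
  </target>
</pool>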


Additional info:

Comment 1 John Ferlan 2015-06-03 00:09:38 UTC
First of all, we cannot prevent the "define" - we can add checks to the start, restart, reload, and similar paths, but define is not where the actual checks and balances are made.

This was "partially fixed" as a result of changes made to support keeping pool state between libvirtd restarts.

While it appears commit id 'cf7392a0d' is a relatively harmless change to just remove the 'conn' attribute from the 'checkPool' call, it was part of a much larger change which ironically allows pools to remain started between libvirtd restarts by setting up a pool state XML - see commits 22592a3f, 6ae11909, 39b183b, 17ab5bc0, 723143a1, a9700771f, and 2a31c5f0 in upstream libvirt v1.2.15.

Prior to the changes, the 'active' state for a 'fs' pool was determined by the return status of a call to virStorageBackendFileSystemIsMounted(), which virStorageBackendFileSystemMount calls in the pool startup path from virStorageBackendFileSystemStart. The *IsMounted function returns 1 when it finds the target path in the list of currently mounted filesystems.
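
As an illustrative sketch only (this is not libvirt's actual code, and the helper name is_target_mounted is made up), a mount-table lookup of this kind can be written with getmntent(3):

#include <mntent.h>
#include <stdio.h>
#include <string.h>

/* Return 1 if target_path appears as a mount point in /proc/mounts,
 * 0 if not, -1 on error. Hypothetical helper, for illustration only. */
static int
is_target_mounted(const char *target_path)
{
    FILE *mounts = setmntent("/proc/mounts", "r");
    struct mntent *ent;
    int found = 0;

    if (!mounts)
        return -1;

    while ((ent = getmntent(mounts)) != NULL) {
        if (strcmp(ent->mnt_dir, target_path) == 0) {
            found = 1;   /* the pool's target path is already mounted */
            break;
        }
    }
    endmntent(mounts);
    return found;
}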

The difference is that the virStorageBackendFileSystemMount code checks whether the number of defined devices is something other than 1 (here assuming it was zero, which is a slightly different but related issue). That check inhibits pool startup. Of course, as you found out, if "something" mounted the pool, then a libvirtd restart would "assume" pool startup took care of it and declare the pool active.

In any case, as of the sequence of commits noted above, this is no longer possible, so in effect the bug is mostly fixed. However, virStorageBackendFileSystemCheck could still return *isActive = true if it finds the target path mounted, so I will use this bug to add that check as well, and to fix up the error messages to make it clearer that either the source device is missing or exactly one source device path is expected.
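
To make the intended fix concrete, here is a minimal sketch, using hypothetical names rather than libvirt's real structures, of a checkPool-style callback that validates the source configuration before consulting the mount table, reusing the is_target_mounted() helper sketched above; the two error strings mirror the messages discussed in this bug:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical checkPool-style callback: reject an invalid source
 * configuration up front instead of reporting isActive = true just
 * because the target path happens to be mounted. */
static int
check_fs_pool(size_t ndevices, const char *target_path, bool *is_active)
{
    /* The same validation start/stop already perform: exactly 1 device. */
    if (ndevices == 0) {
        fprintf(stderr, "missing source device\n");
        return -1;
    }
    if (ndevices != 1) {
        fprintf(stderr, "expected exactly 1 device for the storage pool\n");
        return -1;
    }

    int mounted = is_target_mounted(target_path);  /* see sketch above */
    if (mounted < 0)
        return -1;  /* could not read the mount table */

    *is_active = (mounted == 1);
    return 0;
}

With a check like this in the checkPool path, an externally mounted target can no longer leave the pool marked active with no way to destroy it.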

I'm putting together some patches which I'll post upstream in the next day or so.

Comment 2 John Ferlan 2015-06-03 12:12:26 UTC
Patches posted upstream:

http://www.redhat.com/archives/libvir-list/2015-June/msg00130.html

Comment 3 John Ferlan 2015-06-05 10:32:15 UTC
Patches were pushed upstream

commit 94a1579b0a4853d9f16ffd6edfdfce555b9586b4
Author: John Ferlan <jferlan>
Date:   Tue Jun 2 19:48:56 2015 -0400

    storage: Add check for valid FS types in checkPool callback
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1181087
    
    The virStorageBackendFileSystemIsMounted function is called from three
    source paths: checkPool, startPool, and stopPool. Both start and stop
    validate the FS fields before calling *IsMounted; however, the check
    path performs no such validation. This could lead the code to return
    true in "isActive" if for some reason the target path for the pool was
    mounted, the assumption being that if it was mounted, then we believe
    we started/mounted it.
    
    It's also of note that commit id '81165294' added an error message for
    the start/mount path regarding that the target is already mounted so
    fail the start. That check was adjusted by commit id '13fde7ce' to
    only message if actually mounted.
    
    At one time this led the libvirtd restart autostart code to declare
    that the pool was active even though startPool would inhibit startup
    and stopPool would inhibit shutdown. The autostart path changed as
    of commit id '2a31c5f0', as part of the work to keep storage pools
    started between libvirtd restarts.
    
    This patch adds the same check made prior to start/mount and stop/unmount
    to ensure we have a valid configuration before checking whether the
    target is already mounted to declare "isActive" or not. Finding an
    improper configuration will now cause an error at checkPool, so we can
    no longer be left in a situation where the pool was started and we have
    no way to stop it.

$ git describe 94a1579b0a4853d9f16ffd6edfdfce555b9586b4
v1.2.16-88-g94a1579


In particular, commit id 'fcf0fd52cba1fef113bde7d41c036ead9e2b103b' adjusts the error message to make it clearer that either the source device is missing or exactly 1 device was expected to be defined.

Comment 5 yisun 2015-07-13 09:22:17 UTC
Verified on:
kernel-3.10.0-292.el7.x86_64
qemu-kvm-rhev-2.3.0-9.el7.x86_64
libvirt-1.2.17-2.el7.x86_64


1. Verify the error message
1.1 # cat pool.xml 
<pool type='fs'>
  <name>fs</name>
  <capacity unit='bytes'>1063256064</capacity>
  <allocation unit='bytes'>33722368</allocation>
  <available unit='bytes'>1029533696</available>
  <source>
    <device path='/dev/test/lvol'/>
    <device path='/dev/sdb'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/tmp/fs</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>


1.2 # virsh pool-define pool.xml 
Pool fs defined from pool.xml


1.3 # virsh pool-start fs
error: Failed to start pool fs
error: unsupported configuration: expected exactly 1 device for the storage pool <=== error message is clear. 


1.4 # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
...        
 fs                   inactive   no           <==== inactive as expected. 

2. When device mounted, libvirt should not activate the pool when restarted (w/wo autostart enabled)
2.1 # virsh pool-autostart fs
Pool fs marked as autostarted
 
2.2 # mount /dev/sdb /tmp/fs
# mount /dev/test/lvol1 /tmp/fs
# mount | grep "tmp/fs"
/dev/sdb on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/test-lvol1 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 ...
 fs                   inactive   yes        <=== not activated as expected.

2.3 # virsh pool-autostart --disable fs
Pool fs unmarked as autostarted

# mount | grep "tmp/fs"
/dev/sdb on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/test-lvol1 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

2.4 # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

2.5 # virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
...    
 fs                   inactive   no      <=== inactive as expected

Comment 7 errata-xmlrpc 2015-11-19 06:07:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

