Bug 1181087
| Summary: | Libvirt should give an error when defining an fs pool with multiple source devices | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Yang Yang <yanyang> |
| Component: | libvirt | Assignee: | John Ferlan <jferlan> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | 7.1 | CC: | dyuan, jferlan, rbalakri, shyu, xuzhang, ydu, yisun |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | libvirt-1.2.17-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2015-11-19 06:07:37 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
First of all, we cannot prevent the "define" - we can prevent the start, restart, reload, etc. type paths, but define is not where the actual checks and balances are made.

This was "partially fixed" as a result of changes made to support keeping pool state between libvirtd restarts. While it appears commit id 'cf7392a0d' is a relatively harmless change that just removes the 'conn' attribute from the 'checkPool' call, it was part of a much larger change which, ironically, allows pools to remain started between libvirtd restarts by setting up pool state XML - see commits 22592a3f, 6ae11909, 39b183b, 17ab5bc0, 723143a1, a9700771f, and 2a31c5f0, added to upstream libvirt v1.2.15.

Prior to those changes, the 'active' state for a 'fs' pool was determined by the return status of a call to virStorageBackendFileSystemIsMounted(), which is also called by virStorageBackendFileSystemMount in the pool startup path from virStorageBackendFileSystemStart. The *IsMounted function returns 1 when it finds the target path in the currently mounted pools list. The difference is that the virStorageBackendFileSystemMount code checks that the number of defined devices is exactly 1 (assuming it was zero, which is a slightly different, but related, issue). That check inhibits pool startup. Of course, as you found out, if "something" mounted the pool, then a libvirtd restart would "assume" pool startup took care of it and declare the pool active.

In any case, as of the commits noted above this is no longer possible, so in effect the bug is mostly fixed; however, virStorageBackendFileSystemCheck could still return *isActive = true if it finds the target path mounted. So I will use this bug as a way to add that check as well as fix up the error messages to be a bit clearer that we're either missing the source device or we expect exactly 1 source device path.

I'm putting together some patches which I'll post upstream in the next day or so.
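The validation described above - an 'fs' pool must have exactly one source device - can be sketched in a few lines. This is an illustrative Python sketch, not libvirt's actual C code; the function names `validate_fs_source` and `check_error` are inventions for the example, though the two error strings mirror the messages quoted elsewhere in this report.

```python
import xml.etree.ElementTree as ET

def validate_fs_source(pool_xml):
    """Return the single source device path, or raise on a bad config.

    Mirrors (in spirit) the device-count check libvirt performs before
    mount/unmount: zero devices and multiple devices are both rejected.
    """
    root = ET.fromstring(pool_xml)
    devices = root.findall("./source/device")
    if len(devices) == 0:
        raise ValueError("missing source device")
    if len(devices) > 1:
        raise ValueError("expected exactly 1 device for the storage pool")
    return devices[0].get("path")

def check_error(pool_xml):
    """Convenience wrapper: the error message, or None if the config is valid."""
    try:
        validate_fs_source(pool_xml)
        return None
    except ValueError as err:
        return str(err)

# A two-device pool like the one in this report (trimmed to the relevant elements).
two_device_pool = """
<pool type='fs'>
  <name>fs</name>
  <source>
    <device path='/dev/test/lvol'/>
    <device path='/dev/sdb'/>
    <format type='xfs'/>
  </source>
  <target><path>/tmp/fs</path></target>
</pool>
"""
```

The point of the sketch is only the branch structure: a zero-device pool and a multi-device pool fail for different stated reasons, and only the one-device case proceeds.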
Patches posted upstream: http://www.redhat.com/archives/libvir-list/2015-June/msg00130.html

Patches were pushed upstream:
commit 94a1579b0a4853d9f16ffd6edfdfce555b9586b4
Author: John Ferlan <jferlan>
Date: Tue Jun 2 19:48:56 2015 -0400
storage: Add check for valid FS types in checkPool callback
https://bugzilla.redhat.com/show_bug.cgi?id=1181087
The virStorageBackendFileSystemIsMounted is called from three source paths:
checkPool, startPool, and stopPool. Both start and stop validate the FS
fields before calling *IsMounted; however, the check path performs no such
validation. This could lead the code into returning true in "isActive" if
for some reason the target path for the pool was mounted - the assumption
being that if it was mounted, then we believe we started/mounted it.
It's also of note that commit id '81165294' added an error message for
the start/mount path when the target is already mounted, failing the
start. That check was adjusted by commit id '13fde7ce' to only emit the
message if actually mounted.
At one time this led the libvirtd restart autostart code to declare
that the pool was active even though startPool would inhibit startup
and stopPool would inhibit shutdown. The autostart path changed as
of commit id '2a31c5f0' as part of the work to keep storage pools
started between libvirtd restarts.
This patch adds the same check made prior to start/mount and stop/unmount
to ensure we have a valid configuration before attempting to see if the
target is already mounted to declare "isActive" or not. Finding an improper
configuration will now cause an error at checkPool, which should make it
so we can no longer be left in a situation where the pool was started and
we have no way to stop it.
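The ordering change this commit describes - validate the configuration first, and only then consult the mount table to decide "isActive" - can be illustrated with a small sketch. This is hedged Python, not libvirt's C implementation; `check_pool` and its `(error, is_active)` tuple return are invented for the example.

```python
import xml.etree.ElementTree as ET

def check_pool(pool_xml, mounted_targets):
    """Return (error, is_active) for an 'fs' pool.

    After the fix, an invalid config (device count != 1) produces an error
    before the mount table is ever consulted, so a pool libvirt could never
    start or stop is no longer reported as active just because something
    else mounted its target path.
    """
    root = ET.fromstring(pool_xml)
    ndevices = len(root.findall("./source/device"))
    if ndevices != 1:
        # Validation now happens first, as it already did in start/stop.
        return ("expected exactly 1 device for the storage pool", False)
    target = root.findtext("./target/path")
    return (None, target in mounted_targets)

one_dev = ("<pool type='fs'><source><device path='/dev/sdb'/></source>"
           "<target><path>/tmp/fs</path></target></pool>")
two_dev = ("<pool type='fs'><source><device path='/dev/sdb'/>"
           "<device path='/dev/sdc'/></source>"
           "<target><path>/tmp/fs</path></target></pool>")
```

With the target path present in the mount table, the two-device pool now produces an error instead of being declared active, which is exactly the restart behavior exercised in the verification transcript.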
$ git describe 94a1579b0a4853d9f16ffd6edfdfce555b9586b4
v1.2.16-88-g94a1579
In particular, commit id fcf0fd52cba1fef113bde7d41c036ead9e2b103b adjusts the error message to be clearer that we are either missing the source device or we expected exactly 1 device to be defined.
Verified on:
kernel-3.10.0-292.el7.x86_64
qemu-kvm-rhev-2.3.0-9.el7.x86_64
libvirt-1.2.17-2.el7.x86_64
1. Verify the error message
1.1 # cat pool.xml
<pool type='fs'>
  <name>fs</name>
  <capacity unit='bytes'>1063256064</capacity>
  <allocation unit='bytes'>33722368</allocation>
  <available unit='bytes'>1029533696</available>
  <source>
    <device path='/dev/test/lvol'/>
    <device path='/dev/sdb'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/tmp/fs</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
1.2 # virsh pool-define pool.xml
Pool fs defined from pool.xml
1.3 # virsh pool-start fs
error: Failed to start pool fs
error: unsupported configuration: expected exactly 1 device for the storage pool <=== error message is clear.
1.4 # virsh pool-list --all
Name State Autostart
-------------------------------------------
...
fs inactive no <==== inactive as expected.
2. When the devices are mounted, libvirt should not activate the pool on restart (with or without autostart enabled)
2.1 # virsh pool-autostart fs
Pool fs marked as autostarted
2.2 # mount /dev/sdb /tmp/fs
# mount /dev/test/lvol1 /tmp/fs
# mount | grep "tmp/fs"
/dev/sdb on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/test-lvol1 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
# service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service
# virsh pool-list --all
Name State Autostart
-------------------------------------------
...
fs inactive yes <=== not activated as expected.
2.3 # virsh pool-autostart --disable fs
Pool fs unmarked as autostarted
# mount | grep "tmp/fs"
/dev/sdb on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/test-lvol1 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
2.4 # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service
2.5 # virsh pool-list --all
Name State Autostart
-------------------------------------------
...
fs inactive no <=== inactive as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-2202.html
Description of problem:
Currently, it is possible to define an fs pool using multiple source devices, but pool startup then fails with "missing source device". Once the sources have been mounted to the target, the pool is activated when libvirtd starts up; however, the pool can NOT be destroyed any more.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-12.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Define an fs pool with 2 source devices
# virsh pool-define fs-pool.xml
# virsh pool-list --all
Name State Autostart
-------------------------------------------
fs inactive no
# virsh pool-dumpxml fs
<pool type='fs'>
  <name>fs</name>
  <uuid>f20d9af6-5093-4710-958d-7f872e66fece</uuid>
  <capacity unit='bytes'>1063256064</capacity>
  <allocation unit='bytes'>33722368</allocation>
  <available unit='bytes'>1029533696</available>
  <source>
    <device path='/dev/vg/lvol0'/>
    <device path='/dev/sdc'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/tmp/fs</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
# virsh pool-start fs
error: Failed to start pool fs
error: internal error: missing source device

2. Mount the source devices to the target
# mount /dev/vg/lvol0 /tmp/fs/
# mount /dev/sda7 /tmp/fs
# mount
/dev/mapper/vg-lvol0 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/sda7 on /tmp/fs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

3. Restart libvirtd
# service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

4. Check the fs pool
# virsh pool-list --all
Name State Autostart
-------------------------------------------
fs active no

5. Destroy the fs pool
# virsh pool-destroy fs
error: Failed to destroy pool fs
error: internal error: missing source device

Actual results:
The fs pool can NOT be destroyed.

Expected results:
Libvirt should prevent defining an fs pool with multiple source devices, or at least report a clear error.

Additional info: