Bug 878400 - virsh pool-destroy should fail with error info when pool is in use
Summary: virsh pool-destroy should fail with error info when pool is in use
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Osier Yang
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 886216
 
Reported: 2012-11-20 11:08 UTC by EricLee
Modified: 2013-02-21 07:27 UTC
CC List: 10 users

Fixed In Version: libvirt-0.10.2-11.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 07:27:04 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
  System: Red Hat Product Errata
  ID: RHSA-2013:0276
  Private: 0
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Moderate: libvirt security, bug fix, and enhancement update
  Last Updated: 2013-02-20 21:18:26 UTC

Description EricLee 2012-11-20 11:08:54 UTC
Description of problem:
virsh pool-destroy should fail with error info when pool is in use

Version-Release number of selected component (if applicable):
# rpm -qa libvirt qemu-kvm-rhev kernel
libvirt-0.10.2-9.el6.x86_64
kernel-2.6.32-338.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.330.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a small partition on an unused disk, for example /dev/sda6, and a pool.xml like the following (a libvirt C API sketch of the same setup follows step 11):
<pool type='fs'>
  <name>mypool</name>
  <source>
    <device path='/dev/sda6'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/mypool</path>
  </target>
</pool>

2. Define, build and start the pool

# virsh pool-define pool.xml

# virsh pool-build mypool

# virsh pool-start mypool

3. Check the pool is working fine.

# df -h

4. Prepare the following XML to create a volume in the pool.

# cat vol-disk-template.xml
<volume>
  <name>disk1.img</name>
  <capacity unit='M'>10</capacity>
  <allocation unit='M'>0</allocation>
  <target>
    <path>/var/lib/libvirt/images/mypool/disk1.img</path>
    <format type='raw'/>
  </target>
</volume>

5. Create volumes until the allocation exceeds the disk capacity.

# virsh vol-create mypool vol-disk-template.xml

6. Attach the volume disk1.img to an existing guest as a secondary disk, then start the guest. Keep the disk attached in the guest.

7. # virsh pool-destroy mypool
Pool mypool destroyed

8. And the command exited successfully:
# echo $?
0

9. The pool appears to be destroyed:
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------    
mypool               inactive   no    

10. But # mount shows:
/dev/sda6 on /var/lib/libvirt/images/mypool type ext3 (rw)
and # df -h shows that /dev/sda6 is still mounted.

11. Restarting the pool then fails with an error:
# virsh pool-start mypool
error: Failed to start pool mypool
error: Requested operation is not valid: Target '/var/lib/libvirt/images/mypool' is already mounted
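
For reference, here is a minimal sketch of steps 2 and 5 driven through the libvirt C API instead of virsh. The embedded XML strings are copied from pool.xml and vol-disk-template.xml above; the connection URI and error handling are illustrative assumptions, not part of the original report.

/* Build: gcc repro.c $(pkg-config --cflags --libs libvirt) */
#include <stdio.h>
#include <libvirt/libvirt.h>

/* Pool and volume XML copied from pool.xml and vol-disk-template.xml above. */
static const char *pool_xml =
    "<pool type='fs'>"
    "  <name>mypool</name>"
    "  <source><device path='/dev/sda6'/><format type='auto'/></source>"
    "  <target><path>/var/lib/libvirt/images/mypool</path></target>"
    "</pool>";

static const char *vol_xml =
    "<volume>"
    "  <name>disk1.img</name>"
    "  <capacity unit='M'>10</capacity>"
    "  <allocation unit='M'>0</allocation>"
    "  <target>"
    "    <path>/var/lib/libvirt/images/mypool/disk1.img</path>"
    "    <format type='raw'/>"
    "  </target>"
    "</volume>";

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* virsh pool-define / pool-build / pool-start */
    virStoragePoolPtr pool = virStoragePoolDefineXML(conn, pool_xml, 0);
    if (!pool ||
        virStoragePoolBuild(pool, 0) < 0 ||
        virStoragePoolCreate(pool, 0) < 0) {
        fprintf(stderr, "pool setup failed\n");
    } else {
        /* virsh vol-create mypool vol-disk-template.xml */
        virStorageVolPtr vol = virStorageVolCreateXML(pool, vol_xml, 0);
        if (!vol)
            fprintf(stderr, "vol-create failed\n");
        else
            virStorageVolFree(vol);
    }

    if (pool)
        virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}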

Actual results:
As described in the steps above.
And there is error in libvirtd.log:
2012-11-20 06:57:55.887+0000: 15676: error : virCommandWait:2345 : internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) unexpected exit status 1: umount: /var/lib/libvirt/images/mypool: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

Expected results:
The command should fail with error info.

Additional info:
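For illustration, a minimal, hypothetical caller-side check of the expected behaviour via the libvirt C API (not part of the original report): virStoragePoolDestroy() should return -1 and set a libvirt error when the underlying umount fails, instead of reporting success.

#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

/* destroy_pool_checked() is a hypothetical helper name used for illustration. */
static int destroy_pool_checked(virConnectPtr conn, const char *name)
{
    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, name);
    if (!pool)
        return -1;

    int ret = virStoragePoolDestroy(pool);
    if (ret < 0) {
        virErrorPtr err = virGetLastError();
        fprintf(stderr, "pool-destroy failed: %s\n",
                err && err->message ? err->message : "unknown error");
    }

    virStoragePoolFree(pool);
    return ret;
}

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;
    int rc = destroy_pool_checked(conn, "mypool");
    virConnectClose(conn);
    return rc < 0 ? 1 : 0;
}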

Comment 1 EricLee 2012-11-20 11:12:31 UTC
Hi Dave,

This should be a regression as libvirt-0.9.10-21.el6.x86_64 works well:
# virsh pool-destroy mypool
error: Failed to destroy pool mypool
error: internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) status unexpected: exit status 1

Should I set the Regression keyword for the bug?

Thanks,
EricLee

Comment 3 Dave Allan 2012-11-20 14:17:41 UTC
(In reply to comment #1)
> Should I set the Regression keyword for the bug?

Hi Eric, thanks for asking.  That's odd, since I don't think there was much change in this area of the code.  Osier, can you have a look at this and see what the root cause is?  Thanks, Dave

Comment 4 Osier Yang 2012-11-20 15:40:44 UTC
(In reply to comment #3)
> (In reply to comment #1)
> > Should I set the Regression keyword for the bug?
> 
> Hi Eric, thanks for asking.  That's odd, since I don't think there was much
> change in this area of the code.  Osier, can you have a look at this and see
> what the root cause is?  Thanks, Dave

<...>
2012-11-20 06:57:55.887+0000: 15676: error : virCommandWait:2345 : internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) unexpected exit status 1: umount: /var/lib/libvirt/images/mypool: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
</...>

This shows that the underlying operations are working as expected; the problem appears to be a bug in how we handle the return value. Though I ran out of time today to track down the root cause, I think it's fair to mark this as a regression.

Comment 5 Dave Allan 2012-11-20 18:47:52 UTC
Strange, but ok, marked as regression.

Comment 8 Peter Krempa 2012-11-26 15:08:38 UTC
The fix was pushed upstream:

commit f4ac06569a8ffce24fb8c07a0fc01574e38de6e4
Author: Osier Yang <jyang>
Date:   Wed Nov 21 11:22:39 2012 +0800

    storage: Fix bug of fs pool destroying
    
    Regression introduced by commit 258e06c85b7, "ret" could be set to 1
    or 0 by virStorageBackendFileSystemIsMounted before goto cleanup.
    This could mislead the callers (up to the public API
    virStoragePoolDestroy) to return success even the underlying umount
    command fails.
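
For readers following along, here is a simplified, hypothetical illustration of the pattern the commit message describes; it is not the actual libvirt source. Storing the helper's 0/1 answer directly in "ret" means a later failure path that jumps to cleanup can return 1, which callers treat as success.

#include <stdio.h>

/* is_mounted() and run_umount() are stand-ins for
 * virStorageBackendFileSystemIsMounted() and the umount command run. */
static int is_mounted(const char *target)  { (void)target; return 1; }   /* fs is mounted */
static int run_umount(const char *target)  { (void)target; return -1; }  /* umount fails: device busy */

static int pool_stop_buggy(const char *target)
{
    int ret = -1;

    /* The regression: the helper's 0/1 answer is stored straight into ret. */
    ret = is_mounted(target);
    if (ret < 0)
        goto cleanup;
    if (ret == 0) {
        ret = 0;                /* nothing mounted, nothing to do */
        goto cleanup;
    }

    if (run_umount(target) < 0)
        goto cleanup;           /* umount failed, but ret is still 1, not -1 */

    ret = 0;                    /* success */

cleanup:
    return ret;                 /* returns 1 here; callers only treat < 0 as failure */
}

int main(void)
{
    int ret = pool_stop_buggy("/var/lib/libvirt/images/mypool");
    printf("pool_stop_buggy() returned %d; callers treat >= 0 as success\n", ret);
    return 0;
}

The upstream fix (commit f4ac06569a8) presumably keeps the mount check's result out of "ret" so that the function still returns -1 when the umount fails.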

Comment 12 zhe peng 2012-12-06 10:26:19 UTC
I can reproduce this with:
libvirt-0.10.2-9.el6.x86_64

Verified with:
libvirt-0.10.2-11.el6.x86_64

Steps:
1: Prepare the pool XML:
<pool type='fs'>
  <name>mypool</name>
  <source>
    <device path='/dev/sda11'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/mypool</path>
  </target>
</pool>
2:# virsh pool-define pool.xml
Pool mypool defined from pool.xml

# virsh pool-build mypool
Pool mypool built

# virsh pool-start mypool
Pool mypool started

3: check status
#mount
/dev/sda11 on /var/lib/libvirt/images/mypool type ext4 (rw)

4: Prepare the following XML to create a volume in the pool.

# cat vol.xml
<volume>
  <name>disk_new.img</name>
  <capacity unit='M'>10</capacity>
  <allocation unit='M'>0</allocation>
  <target>
    <path>/var/lib/libvirt/images/mypool/disk_new.img</path>
    <format type='raw'/>
  </target>
</volume>
5. Create volumes until the allocation exceeds the disk capacity.

# virsh vol-create mypool vol.xml

6. Attach the volume disk_new.img to an existing guest as a secondary disk, then start the guest. Keep the disk attached in the guest.

7.
# virsh pool-destroy mypool
error: Failed to destroy pool mypool
error: internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) unexpected exit status 1: umount: /var/lib/libvirt/images/mypool: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

The pool is still active and the error message is clear; verification passed.
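
For completeness, a hypothetical automated version of this verification via the libvirt C API (assumed test code, not part of the verification run): after a failed destroy the call should return -1 and the pool should remain active.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "mypool");
    if (!pool) {
        virConnectClose(conn);
        return 1;
    }

    int destroyed = virStoragePoolDestroy(pool);  /* expected: -1 while the volume is in use */
    int active = virStoragePoolIsActive(pool);    /* expected: 1, the pool is still running */

    printf("destroy rc=%d, pool active=%d -> %s\n", destroyed, active,
           (destroyed < 0 && active == 1) ? "PASS" : "FAIL");

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return (destroyed < 0 && active == 1) ? 0 : 1;
}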

Comment 13 errata-xmlrpc 2013-02-21 07:27:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0276.html

