Bug 878400 - virsh pool-destroy should fail with error info when pool is in use
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Osier Yang
QA Contact: Virtualization Bugs
Keywords: Regression
Depends On:
Blocks: 886216
Reported: 2012-11-20 06:08 EST by EricLee
Modified: 2013-02-21 02:27 EST
CC List: 10 users

See Also:
Fixed In Version: libvirt-0.10.2-11.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 02:27:04 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description EricLee 2012-11-20 06:08:54 EST
Description of problem:
virsh pool-destroy should fail with error info when pool is in use

Version-Release number of selected component (if applicable):
# rpm -qa libvirt qemu-kvm-rhev kernel
libvirt-0.10.2-9.el6.x86_64
kernel-2.6.32-338.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.330.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a small partition on an unused disk, for example /dev/sda6, and a pool.xml like:
<pool type='fs'>
  <name>mypool</name>
  <source>
    <device path='/dev/sda6'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/mypool</path>
  </target>
</pool>

2. Define, build and start the pool

# virsh pool-define pool.xml

# virsh pool-build mypool

# virsh pool-start mypool

3. Check that the pool is working:

# df -h

4. Prepare the following xml to create volume in the pool.

# cat vol-disk-template.xml
<volume>
  <name>disk1.img</name>
  <capacity unit='M'>10</capacity>
  <allocation unit='M'>0</allocation>
  <target>
    <path>/var/lib/libvirt/images/mypool/disk1.img</path>
    <format type='raw'/>
  </target>
</volume>

5. Create volumes until the total allocation exceeds the disk capacity.

# virsh vol-create mypool vol-disk-template.xml
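
To keep creating volumes after the first one, a loop along these lines works (the /tmp paths and the count of 20 are illustrative, not from the original report):

# for i in $(seq 2 20); do sed "s/disk1/disk$i/g" vol-disk-template.xml > /tmp/vol-$i.xml; virsh vol-create mypool /tmp/vol-$i.xml; done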

6. Attach the volume disk1.img to an existing guest as a secondary disk, then start the guest. Keep the disk attached to the guest, for example:
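
(The guest name "rhel6u4" below is illustrative; substitute any existing guest.)

# virsh attach-disk rhel6u4 /var/lib/libvirt/images/mypool/disk1.img vdb --config
# virsh start rhel6u4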

7. # virsh pool-destroy mypool
Pool mypool destroyed

8. The command exited successfully:
# echo $?
0

9. And the pool appears to be destroyed:
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------    
mypool               inactive   no    

10. But # mount shows:
/dev/sda6 on /var/lib/libvirt/images/mypool type ext3 (rw)
and # df -h shows that /dev/sda6 is still mounted.

11. Restarting the pool then fails with an error:
# virsh pool-start mypool
error: Failed to start pool mypool
error: Requested operation is not valid: Target '/var/lib/libvirt/images/mypool' is already mounted
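
(For reference, manual recovery at this point requires first releasing the filesystem, e.g. by detaching the disk from the guest or shutting the guest down, and then unmounting it; the paths below match the steps above.)

# fuser -m /var/lib/libvirt/images/mypool
# umount /var/lib/libvirt/images/mypool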

Actual results:
As described in the steps above.
There is also an error in libvirtd.log:
2012-11-20 06:57:55.887+0000: 15676: error : virCommandWait:2345 : internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) unexpected exit status 1: umount: /var/lib/libvirt/images/mypool: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

Expected results:
The command should fail with an error message.

Additional info:
Comment 1 EricLee 2012-11-20 06:12:31 EST
Hi Dave,

This should be a regression, as libvirt-0.9.10-21.el6.x86_64 works correctly:
# virsh pool-destroy mypool
error: Failed to destroy pool mypool
error: internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) status unexpected: exit status 1

Should I set the Regression keyword for the bug?

Thanks,
EricLee
Comment 3 Dave Allan 2012-11-20 09:17:41 EST
(In reply to comment #1)
> Should I set the Regression keyword for the bug?

Hi Eric, thanks for asking.  That's odd, since I don't think there was much change in this area of the code.  Osier, can you have a look at this and see what the root cause is?  Thanks, Dave
Comment 4 Osier Yang 2012-11-20 10:40:44 EST
(In reply to comment #3)
> (In reply to comment #1)
> > Should I set the Regression keyword for the bug?
> 
> Hi Eric, thanks for asking.  That's odd, since I don't think there was much
> change in this area of the code.  Osier, can you have a look at this and see
> what the root cause is?  Thanks, Dave

<...>
2012-11-20 06:57:55.887+0000: 15676: error : virCommandWait:2345 : internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) unexpected exit status 1: umount: /var/lib/libvirt/images/mypool: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
</...>

This proves the underlying operations are working as expected; the problem is caused by a bug in how we handle the return value. Though I ran out of time today to track down the root cause, I think it's fair to set Regression.
Comment 5 Dave Allan 2012-11-20 13:47:52 EST
Strange, but ok, marked as regression.
Comment 8 Peter Krempa 2012-11-26 10:08:38 EST
The fix was pushed upstream:

commit f4ac06569a8ffce24fb8c07a0fc01574e38de6e4
Author: Osier Yang <jyang@redhat.com>
Date:   Wed Nov 21 11:22:39 2012 +0800

    storage: Fix bug of fs pool destroying
    
    Regression introduced by commit 258e06c85b7, "ret" could be set to 1
    or 0 by virStorageBackendFileSystemIsMounted before goto cleanup.
    This could mislead the callers (up to the public API
    virStoragePoolDestroy) to return success even the underlying umount
    command fails.
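
To make the failure mode concrete, here is a minimal, self-contained C sketch of the pattern the commit message describes (the helper names are hypothetical stand-ins, not the actual libvirt source; libvirt callers treat a negative return as failure):

#include <stdio.h>

/* Stand-ins modeling virStorageBackendFileSystemIsMounted() (1 = mounted,
 * 0 = not mounted, -1 = error) and the umount helper; hard-coded here to
 * model the "device is busy" failure. */
static int is_mounted(const char *target) { (void)target; return 1; }
static int run_umount(const char *target) { (void)target; return -1; }

/* Buggy pattern: "ret" is overwritten with the helper's 0/1 status, so
 * when run_umount() fails and we jump to cleanup, ret is still 1 and a
 * caller that checks for ret < 0 sees success. */
static int pool_stop_buggy(const char *target)
{
    int ret = -1;

    if ((ret = is_mounted(target)) != 1)
        goto cleanup;              /* leaks 0 (or -1) to the caller */

    if (run_umount(target) < 0)
        goto cleanup;              /* umount failed, but ret == 1 here */

    ret = 0;
cleanup:
    return ret;
}

/* Fixed pattern: keep the helper's status in its own variable and set
 * ret = 0 only on the genuine success path. */
static int pool_stop_fixed(const char *target)
{
    int ret = -1;
    int rc = is_mounted(target);

    if (rc < 0)
        goto cleanup;
    if (rc == 1 && run_umount(target) < 0)
        goto cleanup;

    ret = 0;
cleanup:
    return ret;
}

int main(void)
{
    const char *t = "/var/lib/libvirt/images/mypool";
    printf("buggy: %d (non-negative, misread as success)\n", pool_stop_buggy(t));
    printf("fixed: %d (negative, correctly reported as failure)\n", pool_stop_fixed(t));
    return 0;
}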
Comment 12 zhe peng 2012-12-06 05:26:19 EST
I can reproduce this with:
libvirt-0.10.2-9.el6.x86_64

Verified with:
libvirt-0.10.2-11.el6.x86_64

Steps:
1. Prepare the pool XML:
<pool type='fs'>
  <name>mypool</name>
  <source>
    <device path='/dev/sda11'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/mypool</path>
  </target>
</pool>
2. # virsh pool-define pool.xml
Pool mypool defined from pool.xml

# virsh pool-build mypool
Pool mypool built

# virsh pool-start mypool
Pool mypool started

3. Check the status:
# mount
/dev/sda11 on /var/lib/libvirt/images/mypool type ext4 (rw)

4. Prepare the following XML to create a volume in the pool.

# cat vol.xml
<volume>
  <name>disk_new.img</name>
  <capacity unit='M'>10</capacity>
  <allocation unit='M'>0</allocation>
  <target>
    <path>/var/lib/libvirt/images/mypool/disk_new.img</path>
    <format type='raw'/>
  </target>
</volume>
5. Create volumes until the total allocation exceeds the disk capacity.

# virsh vol-create mypool vol.xml

6. Attach the volume disk_new.img to an existing guest as a secondary disk, then start the guest. Keep the disk attached to the guest.

7.
# virsh pool-destroy mypool
error: Failed to destroy pool mypool
error: internal error Child process (/bin/umount /var/lib/libvirt/images/mypool) unexpected exit status 1: umount: /var/lib/libvirt/images/mypool: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

The pool is still active and the error message is clear; verification passed.
Comment 13 errata-xmlrpc 2013-02-21 02:27:04 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0276.html
