Bug 1821988 - Remove the storage volume failed with "--remove-all-storage --delete-storage-volume-snapshots" during undefine a domain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.3
Assignee: Peter Krempa
QA Contact: gaojianan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-08 02:36 UTC by gaojianan
Modified: 2020-11-19 09:04 UTC
CC: 10 users

Fixed In Version: libvirt-6.3.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-17 17:48:08 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Libvirtd log (96.38 KB, text/plain)
2020-04-08 02:36 UTC, gaojianan

Description gaojianan 2020-04-08 02:36:57 UTC
Created attachment 1677092 [details]
Libvirtd log

Steps to Reproduce:
1. Prepare an image on the ceph server and create a snapshot
# qemu-img create -f raw rbd:libvirt-pool/test.img:id=admin:key=$key:auth_supported=cephx:mon_host=$ip 100M

# rbd snap create libvirt-pool/test.img@sn1 --keyring /etc/ceph/ceph.client.admin.keyring -c /etc/ceph/ceph.conf

# rbd snap ls libvirt-pool/test.img --keyring /etc/ceph/ceph.client.admin.keyring -c /etc/ceph/ceph.conf
SNAPID NAME SIZE    PROTECTED TIMESTAMP                
     9 sn1  100 MiB           Tue Apr  7 16:26:26 2020

2. Create an rbd storage pool
# virsh pool-dumpxml rbd-pool
<pool type='rbd'>
  <name>rbd-pool</name>
  <uuid>8c167270-e759-4e14-91f5-1f5ef3dfb8d1</uuid>
  <capacity unit='bytes'>169651208192</capacity>
  <allocation unit='bytes'>4523556864</allocation>
  <available unit='bytes'>45678919680</available>
  <source>
    <host name='10.66.146.31' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='chap' username='admin'>
      <secret type='ceph' usage='client.ceph'/>
    </auth>
  </source>
</pool>

# virsh vol-list rbd-pool
 Name             Path
-----------------------------------------------
 test.img         libvirt-pool/test.img

3. Define a guest with this rbd image
# virsh define test1111.xml
Domain test1 defined from test1111.xml

# virsh dumpxml test1 |grep "auth" -A5 -B5
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='libvirt-pool/test.img'>
        <host name='$ip'/>
        <auth username='admin'>
          <secret type='ceph' usage='client.ceph'/>
        </auth>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
...

Start the domain and it works well:
# virsh start test1
Domain test1 started

4. Destroy and undefine the domain, removing its backing volume
# virsh destroy test1
Domain test1 destroyed

# virsh undefine test1 --remove-all-storage --delete-storage-volume-snapshots
Domain test1 has been undefined
error: Failed to remove storage volume 'vda'(libvirt-pool/test.img)
error: failed to remove volume 'libvirt-pool/test.img': No such file or directory

Double-check in libvirt and on the ceph server:
# virsh pool-refresh rbd-pool
Pool rbd-pool refreshed

# virsh vol-list rbd-pool
 Name             Path
-----------------------------------------------
 test.img         libvirt-pool/test.img

# rbd snap ls libvirt-pool/test.img --keyring /etc/ceph/ceph.client.admin.keyring -c /etc/ceph/ceph.conf
SNAPID NAME SIZE    PROTECTED TIMESTAMP                
     9 sn1  100 MiB           Tue Apr  7 16:26:26 2020


Actual results:
As shown in step 4, undefining the domain while removing its backing volume failed; the volume and its snapshot still exist, and the error message should also be updated.

Expected results:
Undefine the domain and remove the backing image in rbd-pool.

Additional info:
According to the man page:
The --delete-storage-volume-snapshots (previously --delete-snapshots) flag specifies that any snapshots associated with the storage volume should be deleted as well.
So the operation should have succeeded here.

Comment 1 gaojianan 2020-04-08 02:40:32 UTC
Version-Release number of selected component (if applicable):
libvirt-6.0.0-15.virtcov.el8.x86_64
qemu-kvm-4.2.0-17.module+el8.2.0+6141+0f540f16.x86_64

How reproducible:
100%

Comment 2 Peter Krempa 2020-04-09 11:51:35 UTC
Looks like the commit that renamed the flag forgot to fix one occurrence in the code.

Comment 3 Peter Krempa 2020-04-14 17:16:55 UTC
Fixed upstream:

commit a33046f3c3394a6c151b82c6c187e7fb5bc60065
Author: Peter Krempa <pkrempa>
Date:   Thu Apr 9 15:25:35 2020 +0200

    virsh: cmdUndefine: Properly extract delete-storage-volume-snapshots flag
    
    Commit 86608f787ee added the above flag as an alias for ambiguous
    'delete-snapshots' flag, but forgot to actually change the code that
    extracts it, thus the new version actually doesn't work.

Comment 7 gaojianan 2020-05-21 13:23:46 UTC
Tried to verify it with libvirt version:
libvirt-6.3.0-1.module+el8.3.0+6478+69f490bb.x86_64

Setup steps are the same as in https://bugzilla.redhat.com/show_bug.cgi?id=1821988#c0
Try to delete the volume along with its snapshots:
# virsh undefine avocado-vt-vm1 --remove-all-storage --delete-storage-volume-snapshots 
Domain avocado-vt-vm1 has been undefined
Volume 'vda'(libvirt-pool/test2.img) removed.

# virsh vol-list rbd-pool 
 Name         Path
---------------------------------------
 jinqi2.img   libvirt-pool/jinqi2.img
 test1.img    libvirt-pool/test1.img

test2.img is no longer listed.

Check with the rbd client:
# rbd snap ls libvirt-pool/test2.img --keyring /etc/ceph/ceph.client.admin.keyring -c /etc/ceph/ceph.conf
rbd: error opening image test2.img: (2) No such file or directory


Works as expected, so marking this VERIFIED.

Comment 10 errata-xmlrpc 2020-11-17 17:48:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5137

