Bug 1977155 - Snapshot deletion fails for shutoff VM when the disk is using the pool/volume format for local directories.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: beta
Target Release: ---
Assignee: Peter Krempa
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-29 05:30 UTC by YunmingYang
Modified: 2022-05-17 13:02 UTC
CC: 10 users

Fixed In Version: libvirt-7.7.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-17 12:45:04 UTC
Type: Bug
Target Upstream Version: 7.7.0
Embargoed:




Links:
Red Hat Product Errata RHBA-2022:2390 (last updated 2022-05-17 12:45:18 UTC)

Description YunmingYang 2021-06-29 05:30:26 UTC
Description of problem:
For a shutoff VM, attach a disk with 'virsh attach-device ${domain} ${device_xml}'

And here is the xml of device:
<disk type="volume" device="disk">
    <driver name="qemu" type="qcow2" cache="default"/>
    <source volume="${volume_name}" pool="${pool_name}"/>
    <target dev="vda" bus="virtio"/>
</disk>

Then create a snapshot of the VM with this disk by "virsh snapshot-create-as ${domain_name} ${snapshot_name}".

Then try to delete the snapshot by "virsh snapshot-delete ${domain_name} --snapshotname ${snapshot_name}".

The deletion fails with:
error: Failed to delete snapshot ${snapshot_name}
error: Requested operation is not valid: can't manipulate inactive snapshots of disk 'vda'


Version-Release number of selected components (if applicable):
libvirt-libs-7.4.0-1.el9.x86_64
libvirt-daemon-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-core-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-disk-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-iscsi-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-iscsi-direct-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-logical-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-mpath-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-scsi-7.4.0-1.el9.x86_64
libvirt-daemon-driver-interface-7.4.0-1.el9.x86_64
libvirt-daemon-driver-nwfilter-7.4.0-1.el9.x86_64
libvirt-daemon-driver-secret-7.4.0-1.el9.x86_64
libvirt-glib-4.0.0-2.el9.x86_64
libvirt-dbus-1.4.0-4.el9.x86_64
python3-libvirt-7.3.0-1.el9.x86_64
libvirt-daemon-driver-network-7.4.0-1.el9.x86_64
libvirt-daemon-driver-nodedev-7.4.0-1.el9.x86_64
libvirt-client-7.4.0-1.el9.x86_64
libvirt-daemon-driver-qemu-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-rbd-7.4.0-1.el9.x86_64
libvirt-daemon-driver-storage-7.4.0-1.el9.x86_64
libvirt-daemon-kvm-7.4.0-1.el9.x86_64


How reproducible:
100%


Steps to Reproduce:
1 Prepare a shutoff VM
2 Attach a disk by 'virsh attach-device --persistent ${domain_name} ${xml_path}'
  and here is the xml:
  <disk type="volume" device="disk">
    <driver name="qemu" type="qcow2" cache="default"/>
    <source volume="${volume_name}" pool="${pool_name}"/>
    <target dev="vda" bus="virtio"/>
  </disk>
3 Create a snapshot by "virsh snapshot-create-as ${domain_name} ${snapshot_name}"
4 Delete the snapshot by "virsh snapshot-delete ${domain_name} ${snapshot_name}"
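
The disk definition passed to attach-device in step 2 can be generated programmatically. A minimal sketch using only Python's standard library (the pool and volume names below are placeholders, not values from this bug):

```python
import xml.etree.ElementTree as ET

def volume_disk_xml(pool_name: str, volume_name: str, target_dev: str = "vda") -> str:
    """Build a <disk type='volume'> definition like the one attached in step 2."""
    disk = ET.Element("disk", type="volume", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="qcow2", cache="default")
    ET.SubElement(disk, "source", pool=pool_name, volume=volume_name)
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    return ET.tostring(disk, encoding="unicode")

# Save the resulting string to a file and pass it to
# 'virsh attach-device --persistent ${domain_name} ${xml_path}'.
print(volume_disk_xml("images", "vda.qcow2"))
```
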


Actual results:
1 After step 4, the deletion fails with:
error: Failed to delete snapshot ${snapshot_name}
error: Requested operation is not valid: can't manipulate inactive snapshots of disk 'vda'

Expected results:
1 After step 4, the snapshot is deleted successfully


Additional info:

Comment 1 YunmingYang 2021-06-29 06:20:08 UTC
Also here is the pool xml:
<pool type='dir'>
  <name>images</name>
  <uuid>b7a629b9-c21e-4367-bc9d-3045d29799f7</uuid>
  <capacity unit='bytes'>75125227520</capacity>
  <allocation unit='bytes'>6457651200</allocation>
  <available unit='bytes'>68667576320</available>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
    <permissions>
      <mode>0711</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
  </target>
</pool>

Comment 2 Peter Krempa 2021-08-25 13:50:59 UTC
Fixed upstream by:

commit 97e4fb3c106fe38c1440e762014390a9821a4d69
Author: Peter Krempa <pkrempa>
Date:   Fri Jul 2 16:00:05 2021 +0200

    qemu: snapshot: Translate 'volume' disks before attempting offline snapshot manipulation
    
    When the VM is inactive the 'virStorageSource' struct doesn't have the
    necessary data pointing to the actual storage. This is a problem for
    inactive snapshot operations on VMs which use disk type='volume'.
    
    Add the translation steps for reversion and deletion of snapshots.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1977155
    Resolves: https://gitlab.com/libvirt/libvirt/-/issues/202
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

v7.6.0-310-g97e4fb3c10
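
In other words, for an inactive VM the pool/volume reference must first be resolved to the actual storage path (for a dir-type pool such as the one in comment 1, the pool's target path joined with the volume name) before the qcow2 image can be manipulated. A simplified stdlib-only illustration of that lookup, assuming a dir-type pool XML; this is a conceptual sketch, not libvirt's actual implementation:

```python
import posixpath
import xml.etree.ElementTree as ET

def translate_volume_source(pool_xml: str, volume_name: str) -> str:
    """Resolve a <source pool=... volume=...> reference to a concrete file path,
    mimicking the translation the fix performs before offline snapshot
    deletion/reversion. Only dir-type pools are handled in this sketch,
    where path = pool target path + volume name."""
    pool = ET.fromstring(pool_xml)
    if pool.get("type") != "dir":
        raise ValueError("only dir-type pools handled in this sketch")
    target_path = pool.findtext("target/path")
    return posixpath.join(target_path, volume_name)

POOL_XML = """
<pool type='dir'>
  <name>images</name>
  <target><path>/var/lib/libvirt/images</path></target>
</pool>
"""

# → /var/lib/libvirt/images/vda.qcow2
print(translate_volume_source(POOL_XML, "vda.qcow2"))
```
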

Comment 3 yisun 2021-09-29 08:05:44 UTC
[root@dell-per740xd-25 ~]# rpm -qa | grep ^libvirt-7
libvirt-7.7.0-3.el9.x86_64

1. having a shutoff vm with disk type = 'volume'
[root@dell-per740xd-25 ~]# virsh vol-list images-1
 Name                          Path
----------------------------------------------------------------------------------------------------
...
 vda.qcow2                     /var/lib/avocado/data/avocado-vt/images/vda.qcow2


[root@dell-per740xd-25 ~]# virsh dumpxml vm1 | awk  '/<disk/,/<\/disk/'
    <disk type='volume' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source pool='images-1' volume='vda.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

[root@dell-per740xd-25 ~]# virsh domstate vm1
shut off

2. create internal snapshot for it
[root@dell-per740xd-25 ~]# virsh snapshot-create-as vm1 s1
Domain snapshot s1 created
(Snapshot created successfully and can be found in the qcow2 file.)

[root@dell-per740xd-25 ~]# qemu-img info /var/lib/avocado/data/avocado-vt/images/vda.qcow2 | grep s1 -B2
Snapshot list:
ID        TAG               VM SIZE                DATE     VM CLOCK     ICOUNT
1         s1                    0 B 2021-09-29 03:44:52 00:00:00.000          0

3. delete the snapshot
[root@dell-per740xd-25 ~]# virsh snapshot-delete vm1 s1
Domain snapshot s1 deleted

Comment 6 errata-xmlrpc 2022-05-17 12:45:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390

