Bug 1680304 - virsh snapshot-create --redefine fails when it should not
Summary: virsh snapshot-create --redefine fails when it should not
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.1
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.1
Assignee: Eric Blake
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-23 19:04 UTC by Eric Blake
Modified: 2020-11-06 03:45 UTC
CC List: 6 users

Fixed In Version: libvirt-5.3.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-06 07:12:59 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:3723 (last updated 2019-11-06 07:13:39 UTC)

Description Eric Blake 2019-02-23 19:04:32 UTC
Description of problem:
The point of 'virsh snapshot-create --redefine' is to restore the metadata associated with a snapshot that was previously created. For example, when migrating a domain with snapshots from one host to another, libvirt currently refuses to migrate while snapshot metadata is present (a limitation that we should maybe fix someday, but that's a separate bug). The workaround is to run 'virsh snapshot-dumpxml $dom $snap > $snap.xml && virsh snapshot-delete $dom $snap' for each snapshot on the source, migrate, and then run 'virsh snapshot-create --redefine $dom $snap.xml' on the destination to get back to the same state.
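
For illustration, that workaround as a minimal loop sketch ($dom is a placeholder for the domain name; snapshots with parents must be redefined parents-first, and passing --metadata to snapshot-delete, unlike the bare command above, drops only libvirt's metadata while leaving the snapshot data on disk):

# on the source host: save each snapshot's XML, then drop its metadata
$ for snap in $(virsh snapshot-list $dom --name); do
    virsh snapshot-dumpxml $dom $snap > $snap.xml
    virsh snapshot-delete $dom $snap --metadata
  done
# migrate the domain, then on the destination host:
$ for f in *.xml; do virsh snapshot-create --redefine $dom $f; done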

However, there are scenarios where this needlessly fails when dealing with external snapshots created for offline domains. There are also problems when the guest is in the pmsuspended state.

Version-Release number of selected component (if applicable):
libvirt-4.5.0-10.el7

How reproducible:
100%

Steps to Reproduce:
1. Create a domain with a raw disk (shut the guest down afterwards):
$ virt-install --import --name=f29tmp --ram=2048 --os-variant=fedora29 --disk=path=f29tmp.img,format=raw
2. Prove that the default of internal snapshots is impossible (good, since raw disks do not support internal snapshots):
$ virsh snapshot-create-as f29tmp s1
3. Create external snapshot instead:
$ virsh snapshot-create-as f29tmp s1 --disk-only
4. Capture snapshot xml
$ virsh snapshot-dumpxml f29tmp s1 > s1.xml
5. Redefine snapshot on top of itself (should succeed as a no-op, proves the xml is valid):
$ virsh snapshot-create --redefine f29tmp s1.xml
6. Remove the snapshot metadata
$ virsh snapshot-delete --metadata f29tmp s1
7. Try to revive the snapshot definition (should work):
$ virsh snapshot-create --redefine f29tmp s1.xml
8. Try to revive the snapshot with --disk-only (should work):
$ virsh snapshot-create --redefine f29tmp s1.xml --disk-only

Actual results:
1. guest created
2. error: unsupported configuration: internal snapshot for disk vda unsupported for storage type raw
3. Domain snapshot s1 created
4. # s1.xml is valid
5. Domain snapshot s1 created from 's1.xml'
6. Domain snapshot s1 deleted
7. error: unsupported configuration: disk 'vda' must use snapshot mode 'internal'
8. error: invalid argument: disk-only flag for snapshot s1 requires disk-snapshot state


Expected results:
steps 1-6 are okay, but 7 and 8 should resemble step 5.

Additional info:
Testing for poor interaction of --redefine with pmsuspended guests is a bit trickier: by default guests aren't allowed to pmsuspend (you have to edit the domain XML to allow it and have qga running in the guest).
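
For reference, a minimal sketch of that setup (the XML fragments below are standard libvirt elements; the qemu-guest-agent service must be running inside the guest, and $dom again stands in for the domain name):

# in the domain XML (virsh edit $dom), allow suspend-to-mem:
<pm>
  <suspend-to-mem enabled='yes'/>
</pm>
# make sure the guest has a guest-agent channel:
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
# then suspend the guest and confirm the state:
$ virsh dompmsuspend $dom --target mem
$ virsh domstate $dom
pmsuspended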

Comment 2 Eric Blake 2019-02-23 20:01:11 UTC
Upstream patches posted: https://www.redhat.com/archives/libvir-list/2019-February/msg01343.html

Comment 3 John Ferlan 2019-04-17 19:25:44 UTC
Part of the problem was fixed by commit dafef600f4 and the other part by commit 3926d0aa4, both for libvirt 5.1.0:

$ git describe dafef600f4
v5.1.0-rc1-6-gdafef600f4
$ git describe 3926d0aa4
v5.1.0-rc2-6-g3926d0aa49
$

Comment 4 yisun 2019-04-24 05:20:05 UTC
Hi Eric,
I tried the scenario on RHEL 8 AV but didn't reproduce comment 0; please help check whether I missed something. Thanks.
The steps whose results differ from comment 0 are marked with "<===== NOT FAILED AS COMMENT 0 DESCRIBED".

[root@lenovo-sr630-06 images]# rpm -qa | grep libvirt-5
libvirt-5.0.0-7.module+el8+2887+effa3c42.x86_64

[root@lenovo-sr630-06 images]# pwd
/var/lib/libvirt/images

[root@lenovo-sr630-06 images]# qemu-img info raw.img
image: raw.img
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 1.4G

[root@lenovo-sr630-06 images]# virt-install --import --name=f29tmp --ram=2048 --os-variant=fedora29 --disk=path=raw.img,format=raw


[root@lenovo-sr630-06 images]# virsh list
 Id   Name     State
------------------------
 4    f29tmp   running


[root@lenovo-sr630-06 images]# virsh dumpxml f29tmp | grep disk -A6
...
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/raw.img'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
...


[root@lenovo-sr630-06 images]# virsh snapshot-create-as f29tmp s1
error: unsupported configuration: internal snapshot for disk vda unsupported for storage type raw



[root@lenovo-sr630-06 images]# virsh snapshot-create-as f29tmp s1 --disk-only
Domain snapshot s1 created

[root@lenovo-sr630-06 images]# virsh dumpxml f29tmp | grep disk -A10
...
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/raw.s1'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/var/lib/libvirt/images/raw.img'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>


[root@lenovo-sr630-06 images]# virsh snapshot-dumpxml f29tmp s1 > s1.xml
[root@lenovo-sr630-06 images]# cat s1.xml
<domainsnapshot>
  <name>s1</name>
  ...
  <disks>
    <disk name='vda' snapshot='external' type='file'>
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/images/raw.s1'/>
    </disk>
  </disks>
...
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/var/lib/libvirt/images/raw.img'/>
        <backingStore/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </disk>
...


[root@lenovo-sr630-06 images]# virsh snapshot-create --redefine f29tmp s1.xml
Domain snapshot s1 created from 's1.xml'

[root@lenovo-sr630-06 images]# virsh snapshot-delete --metadata f29tmp s1
Domain snapshot s1 deleted

[root@lenovo-sr630-06 images]# virsh snapshot-create --redefine f29tmp s1.xml
Domain snapshot s1 created from 's1.xml'
<===== NOT FAILED AS COMMENT 0 DESCRIBED

[root@lenovo-sr630-06 images]# virsh snapshot-create --redefine f29tmp s1.xml --disk-only
Domain snapshot s1 created from 's1.xml'
<===== NOT FAILED AS COMMENT 0 DESCRIBED

Comment 5 Eric Blake 2019-04-24 20:30:09 UTC
(In reply to yisun from comment #4)

> [root@lenovo-sr630-06 images]# virt-install --import --name=f29tmp
> --ram=2048 --os-variant=fedora29 --disk=path=raw.img,format=raw
> 
> 
> [root@lenovo-sr630-06 images]# virsh list
>  Id   Name     State
> ------------------------
>  4    f29tmp   running

This is where you need to shut down the guest.  Otherwise, ...

> 
> [root@lenovo-sr630-06 images]# virsh snapshot-create-as f29tmp s1
> error: unsupported configuration: internal snapshot for disk vda unsupported
> for storage type raw
> 
> 
> 
> [root@lenovo-sr630-06 images]# virsh snapshot-create-as f29tmp s1 --disk-only
> Domain snapshot s1 created

...this is creating a runtime snapshot rather than an offline snapshot. The bug is only reproduced with offline snapshots.


> [root@lenovo-sr630-06 images]# virsh snapshot-dumpxml f29tmp s1 > s1.xml
> [root@lenovo-sr630-06 images]# cat s1.xml
> <domainsnapshot>
>   <name>s1</name>
>   ...
>   <disks>

To double-check, you also want to look at the <state> element (which you trimmed here).
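
For comparison, the two values to expect in that element (disk-snapshot is what a disk-only snapshot of a running guest records, and is the state the error in comment 0 step 8 refers to; shutoff is what comment 6 later confirms for the offline case):

# snapshot taken with --disk-only while the guest was running:
$ grep '<state>' s1.xml
  <state>disk-snapshot</state>
# snapshot taken after shutting the guest down (the case this bug is about):
$ grep '<state>' s1.xml
  <state>shutoff</state>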

Comment 6 yisun 2019-04-25 06:36:39 UTC
(In reply to Eric Blake from comment #5)
> (In reply to yisun from comment #4)
> 
> > [root@lenovo-sr630-06 images]# virt-install --import --name=f29tmp
> > --ram=2048 --os-variant=fedora29 --disk=path=raw.img,format=raw
> > 
> > 
> > [root@lenovo-sr630-06 images]# virsh list
> >  Id   Name     State
> > ------------------------
> >  4    f29tmp   running
> 
> This is where you need to shut down the guest.  Otherwise, ...

Thanks, Eric. Reproduced with libvirt-5.0.0-7.module+el8+2887+effa3c42.x86_64:

[root@lenovo-sr630-06 images]# qemu-img info raw.img 
image: raw.img
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 1.4G


[root@lenovo-sr630-06 images]# virt-install --import --name=f29tmp --ram=2048 --os-variant=fedora29 --disk=path=raw.img,format=raw


[root@lenovo-sr630-06 images]# virsh destroy f29tmp
Domain f29tmp destroyed


[root@lenovo-sr630-06 images]# virsh domstate f29tmp
shut off


[root@lenovo-sr630-06 images]# virsh snapshot-create-as f29tmp s1 --disk-only
Domain snapshot s1 created


[root@lenovo-sr630-06 images]# virsh dumpxml f29tmp | grep disk -A10
...
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/raw.s1'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>


[root@lenovo-sr630-06 images]# virsh snapshot-dumpxml f29tmp s1 > s1.xml


[root@lenovo-sr630-06 images]# cat s1.xml
<domainsnapshot>
  <name>s1</name>
  <state>shutoff</state>
...
  <disks>
    <disk name='vda' snapshot='external' type='file'>
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/images/raw.s1'/>
    </disk>
  </disks>
...
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/var/lib/libvirt/images/raw.img'/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </disk>


[root@lenovo-sr630-06 images]# virsh snapshot-create --redefine f29tmp s1.xml
Domain snapshot s1 created from 's1.xml'


[root@lenovo-sr630-06 images]# virsh snapshot-delete --metadata f29tmp s1
Domain snapshot s1 deleted

[root@lenovo-sr630-06 images]# virsh snapshot-create --redefine f29tmp s1.xml
error: unsupported configuration: disk 'vda' must use snapshot mode 'internal'
<==== REPRODUCED

[root@lenovo-sr630-06 images]# virsh snapshot-create --redefine f29tmp s1.xml --disk-only
error: invalid argument: disk-only flag for snapshot s1 requires disk-snapshot state
<==== REPRODUCED

Comment 8 yisun 2019-06-14 06:59:45 UTC
Verified:
[root@hp-dl320eg8-13 ~]# rpm -qa | egrep "qemu-kvm-4|libvirt-5"
libvirt-5.4.0-1.module+el8.1.0+3304+7eb41d5f.x86_64
python3-libvirt-5.4.0-1.module+el8.1.0+3305+28419a35.x86_64
qemu-kvm-4.0.0-4.module+el8.1.0+3356+cda7f1ee.x86_64


[root@hp-dl320eg8-13 ~]# qemu-img info /var/lib/libvirt/images/image.raw 
image: /var/lib/libvirt/images/image.raw
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 1.4G


[root@hp-dl320eg8-13 ~]# virt-install --import --name=vmvm --ram=2048 --os-variant=rhel7 --disk=path=/var/lib/libvirt/images/image.raw,format=raw
...

[root@hp-dl320eg8-13 ~]# virsh domstate vmvm
running

[root@hp-dl320eg8-13 ~]# virsh destroy vmvm; virsh domstate vmvm
Domain vmvm destroyed
shut off

[root@hp-dl320eg8-13 ~]# virsh snapshot-create-as vmvm s1 --disk-only
Domain snapshot s1 created

[root@hp-dl320eg8-13 ~]# virsh dumpxml vmvm | awk "/<disk/,/<\/disk/"
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/image.s1'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

[root@hp-dl320eg8-13 ~]# virsh snapshot-dumpxml vmvm s1 > s1.xml


[root@hp-dl320eg8-13 ~]# cat s1.xml 
<domainsnapshot>
  <name>s1</name>
  <state>shutoff</state>
  <creationTime>1560495100</creationTime>
  <memory snapshot='no'/>
  <disks>
    <disk name='vda' snapshot='external' type='file'>
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/images/image.s1'/>
    </disk>
  </disks>
...
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/var/lib/libvirt/images/image.raw'/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      </disk>
...


[root@hp-dl320eg8-13 ~]# virsh snapshot-create --redefine vmvm s1.xml
Domain snapshot s1 created from 's1.xml'
[root@hp-dl320eg8-13 ~]# virsh snapshot-delete --metadata vmvm s1
Domain snapshot s1 deleted

[root@hp-dl320eg8-13 ~]# virsh snapshot-create --redefine vmvm s1.xml
Domain snapshot s1 created from 's1.xml'
[root@hp-dl320eg8-13 ~]# virsh snapshot-delete --metadata vmvm s1
Domain snapshot s1 deleted

[root@hp-dl320eg8-13 ~]# virsh snapshot-create --redefine vmvm s1.xml --disk-only
Domain snapshot s1 created from 's1.xml'

[root@hp-dl320eg8-13 ~]# virsh snapshot-list vmvm
 Name   Creation Time               State
---------------------------------------------
 s1     2019-06-14 02:51:40 -0400   shutoff

Comment 10 errata-xmlrpc 2019-11-06 07:12:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723

