Bug 1658981 - qemu failed to create internal snapshot via 'savevm' when using blockdev
Summary: qemu failed to create internal snapshot via 'savevm' when using blockdev
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: ---
Assignee: Kevin Wolf
QA Contact: Tingting Mao
URL:
Whiteboard:
Depends On:
Blocks: 760547 1660572
 
Reported: 2018-12-13 10:19 UTC by xianwang
Modified: 2020-02-04 18:29 UTC
CC List: 14 users

Fixed In Version: qemu-kvm-4.1.0-16.module+el8.1.1+4917+752cfd65
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1660572
Environment:
Last Closed: 2020-02-04 18:28:48 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:0404 0 None None None 2020-02-04 18:29:57 UTC

Description xianwang 2018-12-13 10:19:02 UTC
Description of problem:
Failed to create internal snapshot on blockdev

Version-Release number of selected component (if applicable):
Host:
4.18.0-50.el8.x86_64
qemu-kvm-3.1.0-0.module+el8+2266+616cf026.next.candidate.x86_64
seabios-bin-1.11.1-2.module+el8+2173+537e5cb5.noarch

Guest:
4.18.0-51.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a guest with -blockdev:
 -blockdev driver=file,cache.direct=on,cache.no-flush=off,filename=/home/rhel80-64-virtio-scsi.qcow2,node-name=drive_scsi1 \
 -blockdev driver=qcow2,node-name=drive_scsi11,file=drive_scsi1 \
 -device scsi-hd,id=image1,drive=drive_scsi11,bus=virtio_scsi_pci0.0,channel=0,scsi-id=0,lun=0,bootindex=0 \

2. Create an internal snapshot via "savevm"
qmp:
{"execute": "human-monitor-command", "arguments": {"command-line": "savevm aa1"}, "id": "WKIXiJxb"}
{"return": "Device '' is writable but does not support snapshots\r\n", "id": "WKIXiJxb"}
same with hmp:
(qemu) info block
drive_scsi11: /home/rhel80-64-virtio-scsi.qcow2 (qcow2)
    Attached to:      image1
    Cache mode:       writeback
(qemu) savevm ab
Device '' is writable but does not support snapshots

3.

Actual results:
Failed to create internal snapshot
{"return": "Device '' is writable but does not support snapshots\r\n", "id": "WKIXiJxb"}

Expected results:
succeed to create internal snapshot

Additional info:

Comment 1 Ademar Reis 2018-12-14 16:33:17 UTC
Is it a regression from 3.1, or does it happen with qemu-2.12 (RHEL-8.0) as well?

We're deprecating internal snapshots, but given that libvirt still defaults to them in virt-manager and virsh, we'll need this cloned if it happens in RHEL8.

Comment 2 xianwang 2018-12-18 03:34:29 UTC
(In reply to Ademar Reis from comment #1)
> Is it a regression from 3.1, or does it happen with qemu-2.12 (RHEL-8.0) as
> well?
> 
> We're deprecating internal snapshots, but given that libvirt still defaults to
> them in virt-manager and virsh, we'll need this cloned if it happens in RHEL8.

Hi, 
This issue also exists on qemu-2.12 (RHEL-8.0). What's more, savevm works well with "-drive"; the issue occurs only with "-blockdev".

qemu build:
4.18.0-51.el8.x86_64
qemu-kvm-2.12.0-46.module+el8+2351+e14a4632.x86_64

Comment 3 CongLi 2018-12-18 06:56:54 UTC
Refer:
https://bugzilla.redhat.com/show_bug.cgi?id=1621944#c16

Quoting Kevin:
"""
(In reply to xianwang from comment #14)
> {"execute": "human-monitor-command", "arguments": {"command-line": "savevm
> sn_test"}, "id": "WKIXiJxb"}    
> {"return": "Device '' is writable but does not support snapshots\r\n", "id":
> "WKIXiJxb"}

I think this is a QEMU bug, though I'm not sure what the correct behaviour would be.

Currently it tries to snapshot all nodes that are either created explicitly by the user or that are root nodes of BlockBackends (e.g. attached to a guest device). User-created nodes include the driver=file node in your example, and this node doesn't support snapshots. This is obviously not useful behaviour.

Maybe the best course of action would be to restrict the snapshotting to nodes that are roots of BlockBackends. This would, however, still include root nodes of block jobs, NBD servers, etc. This won't usually result in errors like here because the affected nodes will still be format level nodes, but could still be unexpected. Another option would be to further restrict this to BlockBackends that are actually attached to a device. Also, either option would exclude explicitly created qcow2 nodes which are not attached to any BlockBackend (e.g. -blockdev without using the node name anywhere else), which may or may not be what users expect.

Another possible option: Snapshot only if it is either a BlockBackend root or doesn't have any BDS parents.
"""
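Kevin's last suggested policy ("snapshot only if it is either a BlockBackend root or doesn't have any BDS parents") can be illustrated with a toy model of the block graph. This is a hypothetical sketch, not QEMU code; the Node class and its fields are invented for illustration:

```python
# Toy model of selecting snapshot nodes in the block graph.
# The Node class and its fields are hypothetical, not QEMU internals.

class Node:
    def __init__(self, name, driver, is_bb_root=False):
        self.name = name
        self.driver = driver          # e.g. 'file' or 'qcow2'
        self.is_bb_root = is_bb_root  # root of a BlockBackend (guest device)
        self.bds_parents = []         # nodes layered on top of this one

def should_snapshot(node):
    """Kevin's last option: snapshot a node only if it is a BlockBackend
    root or has no BDS parents at all."""
    return node.is_bb_root or not node.bds_parents

# The graph from comment 0: an explicit protocol node ('file') with a
# qcow2 format node on top, attached to a scsi-hd device.
proto = Node('drive_scsi1', 'file')
fmt = Node('drive_scsi11', 'qcow2', is_bb_root=True)
proto.bds_parents.append(fmt)

print(should_snapshot(fmt))    # True: BlockBackend root, gets the snapshot
print(should_snapshot(proto))  # False: the 'file' node is skipped, no error
```

Under this policy the failing configuration from comment 0 would snapshot only the qcow2 node, and a standalone -blockdev not referenced anywhere else (no parents, no BlockBackend) would also be included.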

Comment 4 Tingting Mao 2019-01-18 06:48:05 UTC
Hi,

In the latest test, I noticed that when booting the guest with a single -blockdev line, 'savevm' works normally [1], while with two lines it fails [2] as in comment 0.


[1]
# /usr/libexec/qemu-kvm \
        -name 'guest-rhel6.10' \
        -machine q35 \
        -nodefaults \
        -vga qxl \
        -vnc :1 \
        -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
        -blockdev driver=qcow2,cache.direct=off,cache.no-flush=on,file.filename=base.qcow2,file.driver=file,node-name=my_file \
        -device scsi-hd,drive=my_file \
        -monitor stdio \
        -m 8192 \
        -smp 8 \
        -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9  \
        -netdev tap,id=idxgXAlm \
        -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/timao/monitor-qmpmonitor1-20180220-094308-h9I6hRsI,server,nowait \
        -mon chardev=qmp_id_qmpmonitor1,mode=control  \
        -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0 \

# qmp:
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": "qemu-kvm-3.1.0-4.module+el8+2681+819ab34d"}, "capabilities": []}}
{"execute": "qmp_capabilities"}
{"return": {}}

{"execute":"human-monitor-command","arguments":{"command-line":"savevm sn1"}}
{"timestamp": {"seconds": 1547792724, "microseconds": 337572}, "event": "STOP"}
{"timestamp": {"seconds": 1547792736, "microseconds": 226926}, "event": "RESUME"}
{"return": ""}
     


[2]
# /usr/libexec/qemu-kvm \
        -name 'guest-rhel6.10' \
        -machine q35 \
        -nodefaults \
        -vga qxl \
        -vnc :1 \
        -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
        -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=base.qcow2,node-name=my_file \
        -blockdev driver=qcow2,file=my_file,node-name=my \
        -device scsi-hd,drive=my \
        -monitor stdio \
        -m 8192 \
        -smp 8 \
        -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9  \
        -netdev tap,id=idxgXAlm \
        -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/timao/monitor-qmpmonitor1-20180220-094308-h9I6hRsI,server,nowait \
        -mon chardev=qmp_id_qmpmonitor1,mode=control  \
        -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0 \

# qmp:
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": "qemu-kvm-3.1.0-4.module+el8+2681+819ab34d"}, "capabilities": []}}
{"execute": "qmp_capabilities"}
{"return": {}}


{"execute":"human-monitor-command","arguments":{"command-line":"savevm sn1"}}
{"return": "Device '' is writable but does not support snapshots\r\n"}

Comment 5 yujie ma 2019-04-01 08:44:43 UTC
Hi,
I hit the same issue with qemu-kvm-rhev on RHEL 7.7.

Description of problem:

Failed to create an internal snapshot with the 'savevm' command:
{"execute":"human-monitor-command","arguments":{"command-line":"savevm sn1"}}

Host:
qemu-kvm-rhev-2.12.0-25.el7
Red Hat Enterprise Linux Server release 7.7 Beta (Maipo)

Guest:
win2016

Steps to Reproduce:

1. Boot a guest with win16.qcow2
# qemu-img info win16.qcow2 
image: win16.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 11G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

/usr/libexec/qemu-kvm \
        -name 'g-win2016' \
        -machine q35 \
        -nodefaults \
        -vga std \
        -cdrom /home/nfs/iso/winutils.iso \
        -device virtio-scsi-pci,id=scsi1,bus=pcie.0,addr=0x6 \
        -blockdev driver=raw,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/nfs/iso/en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso,node-name=drive3,read-only=on \
        -device scsi-cd,drive=drive3,id=data-disk1,bus=scsi1.0 \
        -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
        -blockdev driver=file,cache.direct=on,cache.no-flush=off,filename=/home/nfs/images/win16.qcow2,node-name=my_file1 \
        -blockdev driver=qcow2,node-name=my1,file=my_file1 \
        -device scsi-hd,drive=my1,bootindex=0 \
        -vnc :0 \
        -monitor stdio \
        -m 8192 \
        -smp 8 \
        -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9 \
        -netdev tap,id=idxgXAlm \
        -chardev socket,id=qmp_id_qmpmonitor1,path=/home/nfs/yujma_win/monitor-qmpmonitor1-20180220-094308-h9I6hRsI,server,nowait \
        -mon chardev=qmp_id_qmpmonitor1,mode=control \
        -device nec-usb-xhci,id=usb1,bus=pcie.0,addr=0x2 \
        -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1

2.

[root@dell-per740-01 yujma_win]# nc -U monitor-qmpmonitor1-20180220-094308-h9I6hRsI 
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 12, "major": 2}, "package": "qemu-kvm-rhev-2.12.0-25.el7"}, "capabilities": []}}

{"execute": "qmp_capabilities"}
{"return": {}}


{"execute":"human-monitor-command","arguments":{"command-line":"savevm sn1"}}
{"return": "Device '' is writable but does not support snapshots\r\n"}


Expected results:
Internal snapshot should be created and shown successfully


Do we need to clone this bug to rhel7.7?

Comment 7 Ademar Reis 2019-05-22 13:37:25 UTC
-blockdev is not being used in base RHEL yet (7 or 8) and internal snapshots are not used in layered products.

So this BZ has a low priority, because it doesn't affect any customers in supported scenarios. Hopefully by the time libvirt supports blockdev in a supported product, it'll also default to external snapshots, but it's too soon to tell.

Leaving it in the backlog with medium priority for now. No ITR.

Comment 8 Peter Krempa 2019-07-24 10:56:50 UTC
Even when using external snapshots by default we can't just break existing functionality. Also full external snapshot support (reverting, deleting) requires a lot of work as this functionality was never implemented in the first place.

Comment 9 Peter Krempa 2019-09-03 14:26:39 UTC
I analyzed the bug for a bit, and the issue is that the internal snapshot functions use bdrv_next to iterate through the disks. The problem is that with blockdev, even backing file and protocol nodes are explicitly instantiated and thus are part of the monitor_nodes list. This means that all of them are checked for internal snapshot support. This obviously won't work: the 'file' driver, for example, certainly does not support snapshots, even though the file it opens contains a qcow2 image. Additionally, it does not make sense to take snapshots of any qcow2 layers hidden behind the top of the chain.

The replacement for bdrv_next thus must iterate only the top elements of chains for this to work properly. It is okay, though, to iterate through chain tops that are not assigned to a BlockBackend.
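The iteration Peter describes can be sketched as a filter that keeps only chain tops, i.e. nodes that no other node references as a child. A toy model with dict-based nodes (not the actual bdrv_next replacement):

```python
# Toy model: pick only chain-top nodes from the monitor's node list.
# The node dicts and their fields are hypothetical, not QEMU internals.

def chain_tops(nodes):
    """Return nodes that no other node references as a child (file or
    backing), i.e. the top elements of their chains."""
    children = {id(child) for node in nodes
                for child in node.get("children", [])}
    return [node for node in nodes if id(node) not in children]

# The failing configuration from comments 0 and 4: an explicit protocol
# node plus the qcow2 format node layered on top of it.
proto = {"node_name": "my_file", "driver": "file"}
fmt = {"node_name": "my", "driver": "qcow2", "children": [proto]}

print([n["node_name"] for n in chain_tops([proto, fmt])])  # ['my']
```

Only the qcow2 top of the chain survives the filter, so the unsupported 'file' protocol node is never asked for a snapshot.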

Comment 10 Kevin Wolf 2019-11-15 14:46:18 UTC
This should be fixed as of upstream commit 05f4aced658a ('block/snapshot: Restrict set of snapshot nodes').

Comment 14 Tingting Mao 2019-11-28 05:50:32 UTC
Tried to verify this bug as below.


Tested with:
qemu-kvm-4.1.0-16.module+el8.1.1+4917+752cfd65
kernel-4.18.0-147.el8.x86_64


Steps:
1. Boot the guest with the command line below.
# /usr/libexec/qemu-kvm \
        -name 'guest' \
        -machine q35 \
        -nodefaults \
        -vga qxl \
        -vnc :0 \
        -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
        -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=rhel77-64-virtio-scsi.qcow2,node-name=my_file \
        -blockdev driver=qcow2,file=my_file,node-name=my \
        -device scsi-hd,drive=my \
        -monitor stdio \
        -m 8192 \
        -smp 8 \
        -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9  \
        -netdev tap,id=idxgXAlm \
        -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/timao/monitor-qmpmonitor1-20180220-094308-h9I6hRsI,server,nowait \
        -mon chardev=qmp_id_qmpmonitor1,mode=control  \
        -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0 \

2. Connect to the socket and exercise the internal snapshot operations.
# nc -U monitor-qmpmonitor1-20180220-094308-h9I6hRsI
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 4}, "package": "qemu-kvm-4.1.0-16.module+el8.1.1+4917+752cfd65"}, "capabilities": ["oob"]}}
{"execute": "qmp_capabilities"}
{"return": {}}


{"execute":"human-monitor-command","arguments":{"command-line":"info snapshots"}}
{"return": "There is no snapshot available.\r\n"}

{"execute":"human-monitor-command","arguments":{"command-line":"savevm sn1"}}
{"timestamp": {"seconds": 1574919961, "microseconds": 879229}, "event": "STOP"}
{"timestamp": {"seconds": 1574919965, "microseconds": 864265}, "event": "RESUME"}
{"return": ""}

{"execute":"human-monitor-command","arguments":{"command-line":"info snapshots"}}
{"return": "List of snapshots present on all disks:\r\nID        TAG                 VM SIZE                DATE       VM CLOCK\r\n--        sn1                 1.57 GiB 2019-11-28 00:46:01   00:01:37.625\r\n"}


{"execute":"human-monitor-command","arguments":{"command-line":"loadvm sn1"}}
{"timestamp": {"seconds": 1574920028, "microseconds": 555526}, "event": "STOP"}
{"timestamp": {"seconds": 1574920035, "microseconds": 67712}, "event": "RESUME"}
{"return": ""}

{"execute":"human-monitor-command","arguments":{"command-line":"info snapshots"}}
{"return": "List of snapshots present on all disks:\r\nID        TAG                 VM SIZE                DATE       VM CLOCK\r\n--        sn1                 1.57 GiB 2019-11-28 00:46:01   00:01:37.625\r\n"}

{"execute":"human-monitor-command","arguments":{"command-line":"delvm sn1"}}
{"return": ""}

{"execute":"human-monitor-command","arguments":{"command-line":"info snapshots"}}
{"return": "There is no snapshot available.\r\n"}



Result:
As shown above, internal snapshots are created, loaded, and deleted successfully.
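For reference, the QMP exchange in the verification above can be scripted. This is a minimal sketch: hmp_qmp_payload builds the 'human-monitor-command' envelope used throughout this bug, while savevm_over_qmp is a hypothetical helper (socket path and tag are placeholders, and there is no full event or error handling):

```python
import json
import socket

def hmp_qmp_payload(command_line):
    """Wrap an HMP command line in the QMP 'human-monitor-command' envelope."""
    return json.dumps({
        "execute": "human-monitor-command",
        "arguments": {"command-line": command_line},
    })

def savevm_over_qmp(sock_path, tag):
    """Hypothetical helper: negotiate capabilities, run 'savevm <tag>' over
    a QMP unix socket, and skip STOP/RESUME events until a reply arrives."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        qmp = s.makefile("rw")
        qmp.readline()                                  # greeting banner
        qmp.write('{"execute": "qmp_capabilities"}\n')
        qmp.flush()
        qmp.readline()                                  # {"return": {}}
        qmp.write(hmp_qmp_payload("savevm " + tag) + "\n")
        qmp.flush()
        while True:
            reply = json.loads(qmp.readline())
            if "return" in reply or "error" in reply:   # skip events
                return reply

print(hmp_qmp_payload("savevm sn1"))
```

An empty string in "return" indicates success, as in the transcript; before the fix, the same call returned the "Device '' is writable but does not support snapshots" message.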

Comment 16 errata-xmlrpc 2020-02-04 18:28:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0404

