Description of problem:
Failed to blockcommit a snapshot created with a disk on a multi-host glusterfs volume.

Version-Release number of selected component (if applicable):
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64

How reproducible:
100%

Steps to reproduce:
1. Create a vm with a disk on a multi-host glusterfs volume:
# virsh dumpxml iommu1
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writethrough'/>
      <source file='/nfs-images/yafu/76.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gluster-vol1/test.qcow2'>
        <host name='10.73.130.49'/>
        <host name='10.66.4.101'/>
        <host name='10.66.70.111'/>
      </source>
      <backingStore/>
      <target dev='vdi' bus='virtio'/>
      <alias name='virtio-disk8'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </disk>

2. Do an external snapshot:
# virsh snapshot-create-as iommu1 s1 --disk-only --diskspec vdi,file=/var/lib/libvirt/images/vdi.s1 --diskspec vda,snapshot=no
Domain snapshot s1 created

3. Check the snapshot and the updated disk XML:
# virsh snapshot-list iommu1
 Name      Creation Time              State
------------------------------------------------------------
 s1        2018-07-30 08:57:04 -0400  disk-snapshot

# virsh dumpxml iommu1
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdi.s1'/>
      <backingStore type='network' index='1'>
        <format type='qcow2'/>
        <source protocol='gluster' name='gluster-vol1/test.qcow2'>
          <host name='10.73.130.49' port='24007'/>
          <host name='10.66.4.101' port='24007'/>
          <host name='10.66.70.111' port='24007'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vdi' bus='virtio'/>
      <alias name='virtio-disk8'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </disk>
...
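For scripted checks, the multi-host <source> element can be pulled straight out of the domain XML. A minimal sketch with Python's stdlib parser, fed the gluster disk element from step 1 (the snippet and variable names are illustrative, not libvirt code):

```python
import xml.etree.ElementTree as ET

# The gluster disk element as dumped by `virsh dumpxml iommu1` in step 1.
disk_xml = """
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gluster-vol1/test.qcow2'>
    <host name='10.73.130.49'/>
    <host name='10.66.4.101'/>
    <host name='10.66.70.111'/>
  </source>
  <target dev='vdi' bus='virtio'/>
</disk>
"""

disk = ET.fromstring(disk_xml)
source = disk.find("source")
# Collect every <host> child -- a multi-host volume lists more than one.
hosts = [h.get("name") for h in source.findall("host")]
```

With the XML above, `hosts` contains all three gluster server addresses, which is exactly the situation that triggers the bug.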
4. Do blockcommit:
# virsh blockcommit iommu1 vdi --pivot
error: internal error: unable to execute QEMU command 'block-commit': invalid URI json:{"server.0.host": "10.73.130.49", "server.1.host": "10.66.4.101", "server.2.host": "10.66.70.111", "driver": "gluster", "path": "test.qcow2", "server.0.type": "tcp", "server.1.type": "tcp", "server.2.type": "tcp", "server.0.port": "24007", "server.1.port": "24007", "server.2.port": "24007", "volume": "gluster-vol1", "debug": "4"}

Actual results:
Failed to blockcommit a snapshot created with a disk on a multi-host glusterfs volume.

Expected results:
Blockcommit should succeed.
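The failure comes down to how the backing file is referenced. A single-host gluster volume fits the classic gluster:// URI syntax, but a multi-host volume cannot be encoded in a URI, so libvirt falls back to a "json:" pseudo-filename of the shape seen in the error, which this 'block-commit' path rejects. A minimal sketch of the two forms, assuming the addresses from the report (the helper function is hypothetical, for illustration only):

```python
import json

def gluster_backing_reference(volume, path, hosts, port=24007):
    """Hypothetical helper: build the backing-file reference a management
    layer could pass to QEMU for a gluster image (not actual libvirt code)."""
    if len(hosts) == 1:
        # One host fits the classic URI syntax that 'block-commit' accepts.
        return f"gluster://{hosts[0]}:{port}/{volume}/{path}"
    # Multiple hosts cannot be expressed as a URI, so a flat "json:"
    # pseudo-filename is emitted -- the form rejected in the error above.
    opts = {"driver": "gluster", "volume": volume, "path": path}
    for i, host in enumerate(hosts):
        opts[f"server.{i}.type"] = "tcp"
        opts[f"server.{i}.host"] = host
        opts[f"server.{i}.port"] = str(port)
    return "json:" + json.dumps(opts)

single = gluster_backing_reference("gluster-vol1", "test.qcow2",
                                   ["10.73.130.49"])
multi = gluster_backing_reference("gluster-vol1", "test.qcow2",
                                  ["10.73.130.49", "10.66.4.101", "10.66.70.111"])
```

The multi-host case produces the same flattened "server.N.*" keys shown in the error message, which is why only the multi-host configuration fails.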
This bug is going to be addressed in the next major release.
Since commit 3f93884a4d047a012b968c62b94ea07dadd1759b
Author: Peter Krempa <pkrempa>
Date:   Mon Jul 22 13:39:24 2019 +0200

    qemu: Add -blockdev support for block commit job

    Introduce the handler for finalizing a block commit and active block
    commit job which will allow using it with blockdev.

we use top-node/base-node to refer to the parts of the backing chain to commit rather than the file names. This allows selecting backing chain members regardless of their apparent filename, which may be rejected by qemu in some instances.

The blockdev feature was enabled since:

commit c6a9e54ce3252196f1fc6aa9e57537a659646d18
Author: Peter Krempa <pkrempa>
Date:   Mon Jan 7 11:45:19 2019 +0100

    qemu: enable blockdev support

    Now that all pieces are in place (hopefully) let's enable -blockdev.

    We base the capability on presence of the fix for 'auto-read-only' on
    files so that blockdev works properly, mandate that qemu supports
    explicit SCSI id strings to avoid ABI regression and that the fix for
    'savevm' is present so that internal snapshots work.

v5.9.0-390-gc6a9e54ce3, and it requires upstream qemu-4.2 or an appropriate downstream build.
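With -blockdev enabled, libvirt can address backing chain members by node name instead of filename, so the multi-host gluster image never has to be flattened into a URI. A rough sketch of the shape of such a node-name based QMP command; the job-id and the "libvirt-N-format" node names are hypothetical examples in libvirt's naming style, and the exact argument set libvirt sends depends on the job:

```python
import json

# Illustrative QMP 'block-commit' request addressing the chain by node
# name rather than by filename (node names are hypothetical examples).
block_commit_cmd = {
    "execute": "block-commit",
    "arguments": {
        "job-id": "commit-vdb",           # hypothetical job name
        "device": "libvirt-2-format",     # active layer (snapshot overlay)
        "base-node": "libvirt-1-format",  # commit target: the gluster image
    },
}
qmp_line = json.dumps(block_commit_cmd)
```

Because "libvirt-1-format" identifies the gluster backing node directly, no filename or URI for the multi-host volume is needed at commit time.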
Test Result: PASS

[root@hp-dl320eg8-05 ~]# rpm -qa | egrep "^libvirt-6|^qemu-kvm-4"
qemu-kvm-4.2.0-10.module+el8.2.0+5740+c3dff59e.x86_64
libvirt-6.0.0-5.module+el8.2.0+5765+64816f89.x86_64

1. Prepare a disk image on the gluster fs server
[root@hp-dl320eg8-05 ~]# qemu-img info gluster://192.168.122.75/gluster-vol1/vdb.qcow2
image: gluster://192.168.122.75/gluster-vol1/vdb.qcow2
file format: qcow2
virtual size: 8 GiB (8589934592 bytes)
disk size: 192 KiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

2. Use it as vdb of the vm
[root@hp-dl320eg8-05 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
…
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gluster-vol1/vdb.qcow2' index='1'>
        <host name='192.168.122.99' port='24007'/>
        <host name='192.168.122.75' port='24007'/>
        <host name='192.168.122.100' port='24007'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

3. Create a snapshot of vdb
[root@hp-dl320eg8-05 ~]# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vdb,file=/tmp/vdb.s1 --diskspec vda,snapshot=no
Domain snapshot snap1 created

[root@hp-dl320eg8-05 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
…
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/tmp/vdb.s1' index='3'/>
      <backingStore type='network' index='1'>
        <format type='qcow2'/>
        <source protocol='gluster' name='gluster-vol1/vdb.qcow2'>
          <host name='192.168.122.99' port='24007'/>
          <host name='192.168.122.75' port='24007'/>
          <host name='192.168.122.100' port='24007'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

4. Do blockcommit
[root@hp-dl320eg8-05 ~]# virsh blockcommit vm1 vdb --pivot
Successfully pivoted

5. Check the vm's xml is correct; all gluster server info is still present in the vm's xml
[root@hp-dl320eg8-05 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
…
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gluster-vol1/vdb.qcow2' index='1'>
        <host name='192.168.122.99' port='24007'/>
        <host name='192.168.122.75' port='24007'/>
        <host name='192.168.122.100' port='24007'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

6. Check the qemu process is correct; all gluster server info is present
[root@hp-dl320eg8-05 ~]# ps -ef | grep vm1
qemu 2310 1 7 07:22 ? 00:00:34 /usr/libexec/qemu-kvm -name guest=vm1,… -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null} -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=libvirt-2-format,id=virtio-disk0,bootindex=1 -blockdev {"driver":"gluster","volume":"gluster-vol1","path":"vdb.qcow2","server":[{"type":"inet","host":"192.168.122.99","port":"24007"},{"type":"inet","host":"192.168.122.75","port":"24007"},{"type":"inet","host":"192.168.122.100","port":"24007"}],"debug":4,"node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}…
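As a quick sanity check on step 6, the -blockdev gluster argument from the qemu command line is plain JSON and can be parsed to confirm every server survived the pivot. A small sketch using the blob copied from the process listing above:

```python
import json

# The gluster -blockdev argument exactly as it appears on the qemu-kvm
# command line in step 6 of the verification.
blockdev = json.loads(
    '{"driver":"gluster","volume":"gluster-vol1","path":"vdb.qcow2",'
    '"server":[{"type":"inet","host":"192.168.122.99","port":"24007"},'
    '{"type":"inet","host":"192.168.122.75","port":"24007"},'
    '{"type":"inet","host":"192.168.122.100","port":"24007"}],'
    '"debug":4,"node-name":"libvirt-1-storage","auto-read-only":true,'
    '"discard":"unmap"}'
)
# All three gluster servers should still be listed after the pivot.
hosts = [server["host"] for server in blockdev["server"]]
```

Here `hosts` comes back with all three addresses, matching the expectation that no server info is lost by the blockcommit.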
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2017