Bug 1805143
Summary: | allow late/lazy opening of backing chain for shallow blockdev-mirror | |
---|---|---|---
Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Peter Krempa <pkrempa>
Component: | qemu-kvm | Assignee: | Kevin Wolf <kwolf>
qemu-kvm sub component: | Block Jobs | QA Contact: | aihua liang <aliang>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | urgent | |
Priority: | urgent | CC: | aliang, bzlotnik, chhu, coli, ddepaula, fjin, hreitz, jinzhao, juzhang, kwolf, lmen, lmiksik, mtessun, nsoffer, pkrempa, virt-maint, xuzhang, yafu, yisun, ymankad
Version: | 8.2 | |
Target Milestone: | rc | |
Target Release: | 8.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1803092 | Environment: |
Last Closed: | 2020-05-05 09:57:19 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1785939, 1798072, 1803092 | |
Description (Peter Krempa, 2020-02-20 11:14:19 UTC)
QA_ACK, please?

Test on qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec: qemu-img convert still fails with a "write" lock error.

Test Steps:

1. Start the src guest with qemu cmds:

/usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1' \
-sandbox on \
-machine q35 \
-nodefaults \
-device VGA,bus=pcie.0,addr=0x1 \
-m 14336 \
-smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2 \
-cpu 'EPYC',+kvm_pv_unhalt \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20200203-033416-61dmcn92,server,nowait \
-mon chardev=qmp_id_qmpmonitor1,mode=control \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20200203-033416-61dmcn92,server,nowait \
-mon chardev=qmp_id_catch_monitor,mode=control \
-device pvpanic,ioport=0x505,id=idy8YPXp \
-object iothread,id=iothread0 \
-chardev socket,path=/var/tmp/serial-serial0-20200203-033416-61dmcn92,server,nowait,id=chardev_serial0 \
-device isa-serial,id=serial0,chardev=chardev_serial0 \
-chardev socket,id=seabioslog_id_20200203-033416-61dmcn92,path=/var/tmp/seabios-20200203-033416-61dmcn92,server,nowait \
-device isa-debugcon,chardev=seabioslog_id_20200203-033416-61dmcn92,iobase=0x402 \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie.0-root-port-3,iothread=iothread0 \
-device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
-blockdev node-name=file_data1,driver=file,aio=threads,filename=/home/data.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_data1 \
-device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie.0-root-port-6 \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-device virtio-net-pci,mac=9a:6c:ca:b7:36:85,id=idz4QyVp,netdev=idNnpx5D,bus=pcie.0-root-port-4,addr=0x0 \
-netdev tap,id=idNnpx5D,vhost=on \
-blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/linux/RHEL8.2.0-BaseOS-x86_64.iso,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
-device ide-cd,id=cd1,drive=drive_cd1,write-cache=on \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-vnc :0 \
-rtc base=utc,clock=host,driftfix=slew \
-boot menu=off,order=cdn,once=c,strict=off \
-enable-kvm \
-device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
-monitor stdio \
-qmp tcp:0:3000,server,nowait \
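(Note: the QMP commands in the following steps are issued over the socket opened by -qmp tcp:0:3000,server,nowait. As a minimal sketch of how to reach it, assuming a plain nc client, a capabilities negotiation has to be sent before any other command is accepted:)

# connect to the QMP server exposed on TCP port 3000 above (nc is just one possible client)
nc localhost 3000
{"execute": "qmp_capabilities"}
{"return": {}}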
2. Create snapshot on "drive_image1":

{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}
{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}

3. DD a file in the guest:

(guest)# dd if=/dev/urandom of=t bs=1M count=1000

4. Create snapshot chain in dst:

# qemu-img create -f qcow2 mirror.qcow2 20G
# qemu-img create -f qcow2 mirror_sn.qcow2 -b mirror.qcow2 20G

5. Do block mirror with sync "top":

{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_mirror_sn','filename':'/home/mirror_sn.qcow2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'mirror_sn','file':'drive_mirror_sn'}}
{"execute": "blockdev-mirror", "arguments": {"sync": "top", "device": "sn1","target": "mirror_sn", "job-id": "j1"}}
{"timestamp": {"seconds": 1584584034, "microseconds": 385957}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1584584034, "microseconds": 386063}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1584584035, "microseconds": 291865}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j1"}}
{"timestamp": {"seconds": 1584584035, "microseconds": 291993}, "event": "BLOCK_JOB_READY", "data": {"device": "j1", "len": 1051852800, "offset": 1051852800, "speed": 0, "type": "mirror"}}

6. qemu-img convert from src to dst:

# qemu-img convert -f qcow2 -O qcow2 rhel820-64-virtio-scsi.qcow2 mirror.qcow2
qemu-img: mirror.qcow2: error while converting qcow2: Failed to get "write" lock
Is another process using the image [mirror.qcow2]?

Hi, Peter

Verification failed; please help to check whether the test steps above are correct. Thanks.

BR,
Aliang

Hi, Aliang

[...]

> 4. Create snapshot chain in dst
> #qemu-img create -f qcow2 mirror.qcow2 20G
> #qemu-img create -f qcow2 mirror_sn.qcow2 -b mirror.qcow2 20G
>
> 5. Do block mirror with sync "top"
>
> {'execute':'blockdev-add','arguments':{'driver':'file','node-name':
> 'drive_mirror_sn','filename':'/home/mirror_sn.qcow2'}}
>
> {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':
> 'mirror_sn','file':'drive_mirror_sn'}}

This step actually opens 'mirror.qcow2', which must be skipped until later. The idea is that the backing files of 'mirror_sn.qcow2' are installed right before we call job-complete. You must add "backing": null here to prevent opening the backing chain. This is another one of the scenarios where some libvirt interaction allows it to work properly.
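(For reference, the corrected form of that second blockdev-add — the same one used in the revised test steps further below — differs only in the explicit "backing": null, which keeps mirror.qcow2 closed until it is attached explicitly later:)

{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'mirror_sn','file':'drive_mirror_sn','backing':null}}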
> {"execute": "blockdev-mirror", "arguments": {"sync": "top", "device":
> "sn1","target": "mirror_sn", "job-id": "j1"}}
> {"timestamp": {"seconds": 1584584034, "microseconds": 385957}, "event":
> "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
> {"timestamp": {"seconds": 1584584034, "microseconds": 386063}, "event":
> "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
> {"return": {}}
> {"timestamp": {"seconds": 1584584035, "microseconds": 291865}, "event":
> "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j1"}}
> {"timestamp": {"seconds": 1584584035, "microseconds": 291993}, "event":
> "BLOCK_JOB_READY", "data": {"device": "j1", "len": 1051852800, "offset":
> 1051852800, "speed": 0, "type": "mirror"}}
>
> 6. qemu-img convert from src to dst.
> #qemu-img convert -f qcow2 -O qcow2 rhel820-64-virtio-scsi.qcow2
> mirror.qcow2
> qemu-img: mirror.qcow2: error while converting qcow2: Failed to get "write"
> lock
> Is another process using the image [mirror.qcow2]?

So at this point this should work, since the backing file will not have been opened yet.

After qemu-img finishes copying the backing data, you then blockdev-add the backing file:

{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_mirror_backing','filename':'/home/mirror.qcow2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'mirror_backing','file':'drive_mirror_backing'}}

And then use 'blockdev-snapshot' to install it as the backing of the mirror_sn.qcow2 image:

{"execute":"blockdev-snapshot","arguments":{"node":"mirror_backing","overlay":"mirror_sn"}}

After this step you can (block)job-complete the mirror and unplug the original chain. The guest should still see the same data. I hope I didn't make any mistakes in the above steps.
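(As an illustrative sketch of the switch-over described above, not a sequence taken verbatim from this bug: once the backing file is installed, the mirror can be completed and the now-unused source overlay nodes dropped. The blockdev-del calls and their ordering are an assumption based on the node names used in these steps.)

{"execute": "block-job-complete", "arguments": {"device": "j1"}}
(wait for the BLOCK_JOB_COMPLETED event, which pivots the device to mirror_sn)
{"execute": "blockdev-del", "arguments": {"node-name": "sn1"}}
{"execute": "blockdev-del", "arguments": {"node-name": "drive_sn1"}}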
Hi, Peter

Please help to check whether the following test steps are correct.

Test Steps:

1. Start guest with qemu cmds:

/usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1' \
-sandbox on \
-machine q35 \
-nodefaults \
-device VGA,bus=pcie.0,addr=0x1 \
-m 14336 \
-smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2 \
-cpu 'EPYC',+kvm_pv_unhalt \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20200203-033416-61dmcn92,server,nowait \
-mon chardev=qmp_id_qmpmonitor1,mode=control \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20200203-033416-61dmcn92,server,nowait \
-mon chardev=qmp_id_catch_monitor,mode=control \
-device pvpanic,ioport=0x505,id=idy8YPXp \
-object iothread,id=iothread0 \
-chardev socket,path=/var/tmp/serial-serial0-20200203-033416-61dmcn92,server,nowait,id=chardev_serial0 \
-device isa-serial,id=serial0,chardev=chardev_serial0 \
-chardev socket,id=seabioslog_id_20200203-033416-61dmcn92,path=/var/tmp/seabios-20200203-033416-61dmcn92,server,nowait \
-device isa-debugcon,chardev=seabioslog_id_20200203-033416-61dmcn92,iobase=0x402 \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie.0-root-port-3,iothread=iothread0 \
-device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
-blockdev node-name=file_data1,driver=file,aio=threads,filename=/home/data.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_data1 \
-device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie.0-root-port-6 \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-device virtio-net-pci,mac=9a:6c:ca:b7:36:85,id=idz4QyVp,netdev=idNnpx5D,bus=pcie.0-root-port-4,addr=0x0 \
-netdev tap,id=idNnpx5D,vhost=on \
-blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/linux/RHEL8.2.0-BaseOS-x86_64.iso,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
-device ide-cd,id=cd1,drive=drive_cd1,write-cache=on \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-vnc :0 \
-rtc base=utc,clock=host,driftfix=slew \
-boot menu=off,order=cdn,once=c,strict=off \
-enable-kvm \
-device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
-monitor stdio \
-qmp tcp:0:3000,server,nowait \

2. Create snapshot sn1:

{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}
{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}
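(Optional check, not part of the original steps: after the blockdev-snapshot above, query-block can be used to confirm that sn1 is now the active layer inserted for the image1 device, with drive_image1 as its backing.)

{"execute": "query-block"}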
3. DD a file in the guest and record its md5sum value:

(guest)# dd if=/dev/urandom of=t bs=1M count=1000
(guest)# md5sum t > sum1

4. Create snapshot chain in dst:

# qemu-img create -f qcow2 mirror.qcow2 20G
# qemu-img create -f qcow2 mirror_sn.qcow2 -b mirror.qcow2 20G

5. Add the target node with backing:null and do block mirror with sync "top":

{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_mirror_sn','filename':'/home/mirror_sn.qcow2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'mirror_sn','file':'drive_mirror_sn','backing':null}}
{"execute": "blockdev-mirror", "arguments": {"sync": "top", "device": "sn1","target": "mirror_sn", "job-id": "j1"}}
{"timestamp": {"seconds": 1584584034, "microseconds": 385957}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1584584034, "microseconds": 386063}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1584584035, "microseconds": 291865}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j1"}}
{"timestamp": {"seconds": 1584584035, "microseconds": 291993}, "event": "BLOCK_JOB_READY", "data": {"device": "j1", "len": 1051852800, "offset": 1051852800, "speed": 0, "type": "mirror"}}

6. qemu-img convert from src to dst:

# qemu-img convert -f qcow2 -O qcow2 /home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2 /home/mirror.qcow2

7. Do snapshot from mirror to mirror_sn:

{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_mirror','filename':'/home/mirror.qcow2'}}
{"return": {}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'mirror','file':'drive_mirror'}}
{"return": {}}
{"execute":"blockdev-snapshot","arguments":{"node":"mirror","overlay":"mirror_sn"}}
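(Optional check, not part of the original steps: after the blockdev-snapshot in step 7, query-named-block-nodes should report a backing-image for the mirror_sn node, confirming that mirror.qcow2 is now attached as its backing file while the mirror job is still in the ready state.)

{"execute": "query-named-block-nodes"}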
8. Start guest in dst with -incoming setting:

/usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1' \
-sandbox on \
-machine q35 \
-nodefaults \
-device VGA,bus=pcie.0,addr=0x1 \
-m 14336 \
-smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2 \
-cpu 'EPYC',+kvm_pv_unhalt \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20200203-033416-61dmcn93,server,nowait \
-mon chardev=qmp_id_qmpmonitor1,mode=control \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20200203-033416-61dmcn92,server,nowait \
-mon chardev=qmp_id_catch_monitor,mode=control \
-device pvpanic,ioport=0x505,id=idy8YPXp \
-object iothread,id=iothread0 \
-chardev socket,path=/var/tmp/serial-serial0-20200203-033416-61dmcn92,server,nowait,id=chardev_serial0 \
-device isa-serial,id=serial0,chardev=chardev_serial0 \
-chardev socket,id=seabioslog_id_20200203-033416-61dmcn92,path=/var/tmp/seabios-20200203-033416-61dmcn92,server,nowait \
-device isa-debugcon,chardev=seabioslog_id_20200203-033416-61dmcn92,iobase=0x402 \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/mirror_sn.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie.0-root-port-3,iothread=iothread0 \
-device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
-blockdev node-name=file_data1,driver=file,aio=threads,filename=/home/data.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_data1 \
-device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie.0-root-port-6 \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-device virtio-net-pci,mac=9a:6c:ca:b7:36:85,id=idz4QyVp,netdev=idNnpx5D,bus=pcie.0-root-port-4,addr=0x0 \
-netdev tap,id=idNnpx5D,vhost=on \
-blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/linux/RHEL8.2.0-BaseOS-x86_64.iso,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
-device ide-cd,id=cd1,drive=drive_cd1,write-cache=on \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-vnc :1 \
-rtc base=utc,clock=host,driftfix=slew \
-boot menu=off,order=cdn,once=c,strict=off \
-enable-kvm \
-device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
-monitor stdio \
-qmp tcp:0:3001,server,nowait \
-incoming tcp:0:5000 \

9. Set migration capabilities in both src and dst:

{"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"pause-before-switchover","state":true}]}}

10. Migrate from src to dst:

{"execute": "migrate","arguments":{"uri": "tcp:10.73.196.71:5000"}}
{"return": {}}
{"timestamp": {"seconds": 1584610188, "microseconds": 641582}, "event": "STOP"}
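(Aside, assuming the pause-before-switchover capability set in step 9: before cancelling the block job, query-migrate on the src can be used to confirm that the migration has paused in the "pre-switchover" state rather than completed.)

{"execute": "query-migrate"}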
11. Cancel block jobs in src:

{"execute":"block-job-cancel","arguments":{"device":"j1"}}
{"return": {}}
{"timestamp": {"seconds": 1584610193, "microseconds": 196697}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1584610193, "microseconds": 196761}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}
{"timestamp": {"seconds": 1584610193, "microseconds": 196863}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 1053687808, "offset": 1053687808, "speed": 0, "type": "mirror"}}
{"timestamp": {"seconds": 1584610193, "microseconds": 196912}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1584610193, "microseconds": 196953}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}

12. Continue migration in src:

{"execute":"migrate-continue","arguments":{"state":"pre-switchover"}}
{"return": {}}

13. Check dst guest status:

(qemu) info status
VM status: running

14. Check the file's md5sum value in the dst guest:

(guest)# md5sum t
(guest)# cat sum1

After step 14, the values should be the same.

When testing with qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec, all test steps can be executed successfully.
But when testing with qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae, the test is blocked at step 7, where the live snapshot fails:

{"execute":"blockdev-snapshot","arguments":{"node":"mirror","overlay":"mirror_sn"}}
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}

This error message differs from the one reported in the bug, so I am not sure whether the reproduction steps are correct. Please help to check again.

Thanks,
Aliang

(In reply to aihua liang from comment #11)

Note that "live storage migration" is an oVirt term for just using blockdev-mirror to copy the image to some new destination, so in fact no migration of the VM is involved. Thus, after step 7 you should use blockjob-complete/job-complete to switch over to the new images, and that's it.

> When test with qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec, all test
> steps can executed successfully.
> But when test with qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae, test
> blocked in step7 after live snapshot with error info:
>
> {"execute":"blockdev-snapshot","arguments":{"node":"mirror","overlay":
> "mirror_sn"}}
> {"error": {"class": "GenericError", "desc": "The overlay is already in use"}}

Yes, this is exactly expected. The qemu patches actually allow this specific operation, which is then used by libvirt to do the appropriate steps, so this is the exact message I've seen after the required libvirt changes. I'm sorry I didn't update the bug; we were discussing the design issues upstream and I then forgot.

Test on qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec: this issue no longer occurs.
Test Steps:

1. Start guest with qemu cmds:

/usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1' \
-sandbox on \
-machine q35 \
-nodefaults \
-device VGA,bus=pcie.0,addr=0x1 \
-m 14336 \
-smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2 \
-cpu 'EPYC',+kvm_pv_unhalt \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20200203-033416-61dmcn92,server,nowait \
-mon chardev=qmp_id_qmpmonitor1,mode=control \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20200203-033416-61dmcn92,server,nowait \
-mon chardev=qmp_id_catch_monitor,mode=control \
-device pvpanic,ioport=0x505,id=idy8YPXp \
-object iothread,id=iothread0 \
-chardev socket,path=/var/tmp/serial-serial0-20200203-033416-61dmcn92,server,nowait,id=chardev_serial0 \
-device isa-serial,id=serial0,chardev=chardev_serial0 \
-chardev socket,id=seabioslog_id_20200203-033416-61dmcn92,path=/var/tmp/seabios-20200203-033416-61dmcn92,server,nowait \
-device isa-debugcon,chardev=seabioslog_id_20200203-033416-61dmcn92,iobase=0x402 \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie.0-root-port-3,iothread=iothread0 \
-device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
-blockdev node-name=file_data1,driver=file,aio=threads,filename=/home/data.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_data1 \
-device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie.0-root-port-6 \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-device virtio-net-pci,mac=9a:6c:ca:b7:36:85,id=idz4QyVp,netdev=idNnpx5D,bus=pcie.0-root-port-4,addr=0x0 \
-netdev tap,id=idNnpx5D,vhost=on \
-blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/linux/RHEL8.2.0-BaseOS-x86_64.iso,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
-device ide-cd,id=cd1,drive=drive_cd1,write-cache=on \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-vnc :0 \
-rtc base=utc,clock=host,driftfix=slew \
-boot menu=off,order=cdn,once=c,strict=off \
-enable-kvm \
-device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
-monitor stdio \
-qmp tcp:0:3000,server,nowait \

2. Create snapshot sn1:

{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}
{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}
3. DD a file in the guest and record its md5sum value:

(guest)# dd if=/dev/urandom of=t bs=1M count=1000
(guest)# md5sum t > sum1

4. Create snapshot chain in dst:

# qemu-img create -f qcow2 mirror.qcow2 20G
# qemu-img create -f qcow2 mirror_sn.qcow2 -b mirror.qcow2 20G

5. Add the target node with backing:null and do block mirror with sync "top":

{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_mirror_sn','filename':'/home/mirror_sn.qcow2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'mirror_sn','file':'drive_mirror_sn','backing':null}}
{"execute": "blockdev-mirror", "arguments": {"sync": "top", "device": "sn1","target": "mirror_sn", "job-id": "j1"}}
{"timestamp": {"seconds": 1584584034, "microseconds": 385957}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1584584034, "microseconds": 386063}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1584584035, "microseconds": 291865}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j1"}}
{"timestamp": {"seconds": 1584584035, "microseconds": 291993}, "event": "BLOCK_JOB_READY", "data": {"device": "j1", "len": 1051852800, "offset": 1051852800, "speed": 0, "type": "mirror"}}

6. qemu-img convert from src to dst:

# qemu-img convert -f qcow2 -O qcow2 /home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2 /home/mirror.qcow2

7. Do snapshot from mirror to mirror_sn:

{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_mirror','filename':'/home/mirror.qcow2'}}
{"return": {}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'mirror','file':'drive_mirror'}}
{"return": {}}
{"execute":"blockdev-snapshot","arguments":{"node":"mirror","overlay":"mirror_sn"}}

8. Complete the block job:

{ "execute": "block-job-complete", "arguments": { "device": "j1" } }
{"return": {}}
{"timestamp": {"seconds": 1584674440, "microseconds": 292177}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1584674440, "microseconds": 292482}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}
{"timestamp": {"seconds": 1584674440, "microseconds": 292678}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 1056702464, "offset": 1056702464, "speed": 0, "type": "mirror"}}
{"timestamp": {"seconds": 1584674440, "microseconds": 292741}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1584674440, "microseconds": 292783}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}

9. Check block info:

(qemu) info block
mirror_sn: json:{"backing": {"driver": "qcow2", "file": {"driver": "file", "filename": "/home/mirror.qcow2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/home/mirror_sn.qcow2"}} (qcow2)
    Attached to: /machine/peripheral/image1/virtio-backend
    Cache mode: writeback
    Backing file: /home/mirror.qcow2 (chain depth: 1)

drive_data1: /home/data.qcow2 (qcow2)
    Attached to: /machine/peripheral/data1/virtio-backend
    Cache mode: writeback, direct

drive_cd1: /home/kvm_autotest_root/iso/linux/RHEL8.2.0-BaseOS-x86_64.iso (raw, read-only)
    Attached to: cd1
    Removable device: not locked, tray closed
    Cache mode: writeback, direct

After step 9, the VM is running on the new mirror chain, so I set the bug's status to "Verified".
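(A further consistency check, not part of the verified steps above: since the md5sum of the file t was recorded in step 3, it can be compared again inside the guest after the pivot, as in steps 13-14 of the earlier procedure; the two values should match now that the guest reads from the mirror_sn/mirror chain.)

(guest)# md5sum t
(guest)# cat sum1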
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017