Bug 1763937
Summary: Fail to do blockcommit with more than one snapshot
Product: Red Hat Enterprise Linux Advanced Virtualization
Component: qemu-kvm
qemu-kvm sub component: General
Status: CLOSED ERRATA
Severity: unspecified
Priority: high
Version: 8.2
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc
Reporter: Han Han <hhan>
Assignee: Kevin Wolf <kwolf>
QA Contact: aihua liang <aliang>
CC: coli, ddepaula, dyuan, kchamart, kwolf, libvirt-maint, lmen, smitterl, tburke, virt-maint, xuzhang, yisun
Cloned as: 1773925 (view as bug list)
Last Closed: 2020-05-05 09:50:34 UTC
Type: Bug
Bug Blocks: 1773925, 1780705
BTW, the issue is not reproduced when only one external snapshot is created before blockcommit.

Created attachment 1627867 [details]
The libvirtd log of blockcommit from the mid layer
Steps:
Start a VM with -blockdev enabled.
Create snapshots on it:
➜ ~ virsh snapshot-create-as copy s1 --no-metadata --disk-only
Domain snapshot s1 created
➜ ~ virsh snapshot-create-as copy s2 --no-metadata --disk-only
Domain snapshot s2 created
➜ ~ virsh snapshot-create-as copy s3 --no-metadata --disk-only
Domain snapshot s3 created
Then do a shallow blockcommit from the mid layer:
➜ ~ virsh blockcommit copy sda --top /var/lib/libvirt/images/copy.s1 --shallow --wait --verbose
error: internal error: unable to execute QEMU command 'block-commit': 'libvirt-2-format' is not in this backing file chain
➜ ~ virsh blockcommit copy sda --top /var/lib/libvirt/images/copy.s1 --shallow --wait --verbose
error: internal error: child reported (status=125): Requested operation is not valid: Setting different SELinux label on /var/lib/libvirt/images/copy.qcow2 which is already in use
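The error above means qemu could not find the commit target by walking the active image's backing chain. As an illustration only (not qemu code, and the node names are taken from the error message, not from qemu internals), the check and the way this bug breaks it can be modeled like this:

```python
# Toy model of qemu's "is the commit target in this backing file chain?"
# check. In the bug, a later blockdev-snapshot drops a backing link, so
# the mid-layer node is no longer reachable from the active node and
# block-commit fails with the error seen above.

class Node:
    def __init__(self, name, backing=None):
        self.name = name
        self.backing = backing  # the node this one overlays, or None

def in_backing_chain(active, name):
    """Walk backing links from the active node looking for `name`."""
    node = active
    while node is not None:
        if node.name == name:
            return True
        node = node.backing
    return False

# Healthy chain: base <- s1 <- s2 <- s3 (s3 is the active layer).
base = Node("base")
s1 = Node("libvirt-2-format", backing=base)
s2 = Node("libvirt-3-format", backing=s1)
s3 = Node("libvirt-4-format", backing=s2)
assert in_backing_chain(s3, "libvirt-2-format")  # commit of s1 would be allowed

# Buggy state: the second snapshot re-applied "backing: null" and severed
# a link, so s1 is no longer reachable from the active node.
s2.backing = None
assert not in_backing_chain(s3, "libvirt-2-format")  # -> qemu's error
```

This is only meant to show why the error message talks about membership in the chain rather than about the image file itself.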
This is a bug in qemu. When "backing: null" is used to attach the image which will become the overlay via blockdev-snapshot, qemu then during another snapshot applies that property and thus drops the backing chain. I've described it more in depth here: https://lists.gnu.org/archive/html/qemu-block/2019-10/msg01404.html

Kevin posted patches for this bug: https://lists.gnu.org/archive/html/qemu-block/2019-11/msg00234.html

Can reproduce this issue on qemu-kvm-4.1.0-14.module+el8.1.1+4632+a8269660.x86_64.

Reproduce steps:

1. Start the guest with qemu cmds:
/usr/libexec/qemu-kvm \
 -name 'avocado-vt-vm1' \
 -machine pc \
 -nodefaults \
 -device VGA,bus=pci.0,addr=0x2 \
 -m 7168 \
 -smp 4,maxcpus=4,cores=2,threads=1,dies=1,sockets=2 \
 -cpu 'Skylake-Client',+kvm_pv_unhalt \
 -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20191113-221853-WI9PnBdR,server,nowait \
 -mon chardev=qmp_id_qmpmonitor1,mode=control \
 -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20191113-221853-WI9PnBdR,server,nowait \
 -mon chardev=qmp_id_catch_monitor,mode=control \
 -device pvpanic,ioport=0x505,id=idUA0Y6Z \
 -chardev socket,server,nowait,id=chardev_serial0,path=/var/tmp/serial-serial0-20191113-221853-WI9PnBdR \
 -device isa-serial,id=serial0,chardev=chardev_serial0 \
 -chardev socket,id=seabioslog_id_20191113-221853-WI9PnBdR,path=/var/tmp/seabios-20191113-221853-WI9PnBdR,server,nowait \
 -device isa-debugcon,chardev=seabioslog_id_20191113-221853-WI9PnBdR,iobase=0x402 \
 -device qemu-xhci,id=usb1,bus=pci.0,addr=0x3 \
 -blockdev driver=file,filename=/home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2,node-name=file_node \
 -blockdev driver=qcow2,file=file_node,node-name=drive_image1 \
 -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
 -device scsi-hd,id=image1,drive=drive_image1 \
 -device virtio-net-pci,mac=9a:9d:33:c3:3b:1a,id=idFXLnE7,netdev=idjZr0NP,bus=pci.0,addr=0x5 \
 -netdev tap,id=idjZr0NP,vhost=on \
 -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
 -vnc :0 \
 -rtc base=utc,clock=host,driftfix=slew \
 -boot order=cdn,once=c,menu=off,strict=off \
 -enable-kvm \
 -monitor stdio

2. Create two snapshot nodes with blockdev-create, with backing-file and backing-fmt set, but add the nodes with backing:null.

2.1 Create sn1:
{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480,'backing-file':'/home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2','backing-fmt':'qcow2'},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1','backing':null}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}

2.2 Check sn1 info online:
# qemu-img info sn1 -U
image: sn1
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 256 KiB
cluster_size: 65536
backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

2.3 Create sn2:
{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn2','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn2','filename':'/root/sn2'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn2','size':21474836480,'backing-file':'/root/sn1','backing-fmt':'qcow2'},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn2','file':'drive_sn2','backing':null}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}

2.4 Check sn2 info online:
# qemu-img info sn2 -U
image: sn2
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 256 KiB
cluster_size: 65536
backing file: /root/sn1
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

3. Do snapshot on drive_image1, check block info:
{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}
{"return": {}}
(qemu) info block
sn1: /root/sn1 (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (chain depth: 1)

4. Do snapshot on sn1, then check block info:
{"execute":"blockdev-snapshot","arguments":{"node":"sn1","overlay":"sn2"}}
{"return": {}}
(qemu) info block
sn2: json:{"backing": {"backing": null, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}} (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /root/sn1 (chain depth: 1)

5. Do live commit from sn2 to drive_image1:
{'execute': 'block-commit', 'arguments': { 'device': 'sn2','job-id':'j3'}}
{"timestamp": {"seconds": 1573713458, "microseconds": 106369}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j3"}}
{"timestamp": {"seconds": 1573713458, "microseconds": 106407}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j3"}}
{"return": {}}
{"timestamp": {"seconds": 1573713458, "microseconds": 107090}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j3"}}
{"timestamp": {"seconds": 1573713458, "microseconds": 107115}, "event": "BLOCK_JOB_READY", "data": {"device": "j3", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}

6. After the commit job reaches the ready status, complete the job:
{ "execute": "block-job-complete", "arguments": { "device": "j3"}}
{"return": {}}
{"timestamp": {"seconds": 1573713484, "microseconds": 42008}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j3"}}
{"timestamp": {"seconds": 1573713484, "microseconds": 42037}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j3"}}
{"timestamp": {"seconds": 1573713484, "microseconds": 42143}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j3", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
{"timestamp": {"seconds": 1573713484, "microseconds": 42191}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j3"}}
{"timestamp": {"seconds": 1573713484, "microseconds": 42234}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j3"}}

7. Check block info:
(qemu) info block
sn1: json:{"backing": null, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}} (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (chain depth: 0)

Usually we test without the backing-file setting, and it looks like below:

1.
Start the guest with the same qemu command line as in the previous comment.

2. Create two snapshot nodes with blockdev-create, without backing-file and backing-fmt set.

2.1 Create sn1:
{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}

2.2 Check sn1 info online:
# qemu-img info sn1 -U
image: sn1
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 256 KiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

2.3 Create sn2:
{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn2','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn2','filename':'/root/sn2'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn2','size':21474836480},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn2','file':'drive_sn2'}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}

2.4 Check sn2 info online:
# qemu-img info sn2 -U
image: sn2
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 256 KiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

3. Do snapshot on drive_image1, check block info:
{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}
{"return": {}}
(qemu) info block
sn1: json:{"backing": {"driver": "qcow2", "file": {"driver": "file", "filename": "/home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}} (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (chain depth: 1)

4. Do snapshot on sn1, then check block info:
{"execute":"blockdev-snapshot","arguments":{"node":"sn1","overlay":"sn2"}}
{"return": {}}
(qemu) info block
sn2: json:{"backing": {"backing": {"driver": "qcow2", "file": {"driver": "file", "filename": "/home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}} (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: json:{"backing": {"driver": "qcow2", "file": {"driver": "file", "filename": "/home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}} (chain depth: 2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /root/sn1 (chain depth: 1)

5. Do live commit from sn2 to drive_image1:
{'execute': 'block-commit', 'arguments': { 'device': 'sn2','job-id':'j3'}}
{"timestamp": {"seconds": 1573714875, "microseconds": 513176}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j3"}}
{"timestamp": {"seconds": 1573714875, "microseconds": 513239}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j3"}}
{"return": {}}
{"timestamp": {"seconds": 1573714875, "microseconds": 553975}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j3"}}
{"timestamp": {"seconds": 1573714875, "microseconds": 554023}, "event": "BLOCK_JOB_READY", "data": {"device": "j3", "len": 327680, "offset": 327680, "speed": 0, "type": "commit"}}

6. After the commit job reaches the ready status, complete the job:
{ "execute": "block-job-complete", "arguments": { "device": "j3"}}
{"return": {}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664418}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j3"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664463}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j3"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664568}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j3", "len": 524288, "offset": 524288, "speed": 0, "type": "commit"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664632}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j3"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664680}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j3"}}

7. Check block info:
(qemu) info block
drive_image1: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (qcow2)
    Attached to: image1
    Cache mode: writeback

From c#4 and c#5, we can see that when the block node is added with backing:null, the snapshot chain (base->sn1) is not built after the command "{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}".
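The failure mode described in these two comments can be sketched as a toy state machine (my simplification for illustration, far removed from qemu's actual block layer): a node added with an explicit 'backing': null remembers that option, and the bug is that a later graph change re-applies it, undoing the link that the first blockdev-snapshot created.

```python
# Toy model of the bug: a stored "backing: null" creation option is
# re-applied during the second snapshot and silently severs the
# base <- sn1 link that the first blockdev-snapshot established.

class BDS:
    def __init__(self, name, options):
        self.name = name
        self.options = dict(options)           # options given at blockdev-add
        self.backing = options.get("backing")  # runtime backing node

def blockdev_snapshot(node, overlay):
    # The snapshot makes `overlay` the new active layer on top of `node`.
    overlay.backing = node

def buggy_reapply_options(node):
    # Buggy behaviour: re-applying the creation-time options during an
    # unrelated graph change resets the runtime backing to the stored null.
    if node.options.get("backing", "unset") is None:
        node.backing = None

base = BDS("drive_image1", {})
sn1 = BDS("sn1", {"backing": None})  # added with 'backing': null
sn2 = BDS("sn2", {"backing": None})

blockdev_snapshot(base, sn1)         # step 3: base <- sn1 link exists
assert sn1.backing is base

buggy_reapply_options(sn1)           # triggered by the second snapshot
blockdev_snapshot(sn1, sn2)          # step 4: sn1 <- sn2

assert sn1.backing is None           # base <- sn1 was lost: the bug
assert sn2.backing is sn1            # so 'info block' reports depth 1, not 2
```

This matches the observation that the chain depth is 1 after step 4 where 2 was expected.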
(In reply to aihua liang from comment #6)
> From c#4 and c#5, we can see that when add block node with backing:null, the
> snapshot chain(base->sn1) is not built after cmd
> "{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}".

To be precise, the snapshot link base <- sn1 is created during the first blockdev-snapshot, as 'info block' shows in step 3:

> Backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (chain depth: 1)

However, it gets lost again during the second blockdev-snapshot, so that 'info block' in step 4 reports a chain size of 1 instead of 2:

> Backing file: /root/sn1 (chain depth: 1)

(In reply to Kevin Wolf from comment #7)
> To be precise, the snapshot link base <- sn1 is created during the first
> blockdev-snapshot [...] However, it gets lost again during the second
> blockdev-snapshot, so that 'info block' in step 4 reports a chain size of 1
> instead of 2.

Yes, Kevin, I see. What confuses me is that the same command, "{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}", gives different output for step 3 in comment 4 and comment 5.

Output for step 3 in comment 4, with the backing:null setting:

(qemu) info block
sn1: /root/sn1 (qcow2)   --> no backing info is displayed here; is that just because I set backing:null? But in step 4, after the second snapshot operation, the backing can be displayed. I'm not sure which behavior is correct for backing:null.
    Attached to: image1
    Cache mode: writeback
    Backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (chain depth: 1)

Output for step 3 in comment 5, without the backing:null setting:

(qemu) info block
sn1: json:{"backing": {"driver": "qcow2", "file": {"driver": "file", "filename": "/home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}} (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (chain depth: 1)

Yes, this looks wrong. It is likely an effect of the inconsistent state that this bug is about.

Sorry, actually the behaviour in step 3 is correct. The bug only happens in step 4.

(In reply to aihua liang from comment #8)
> Output for step 3 in comment 4 with the backing:null setting:
> (qemu) info block
> sn1: /root/sn1 (qcow2)   --> no backing info is displayed here [...]

In comment 4, we have the backing file name stored in the image file /root/sn1. At runtime it is overridden with 'backing': null, but the blockdev-snapshot in step 3 attaches the backing file again. So the state in the QEMU process is exactly as described in the qcow2 header of /root/sn1 and we don't need to use a json: protocol to override the setting.
> Output for step 3 in comment 5 without the backing:null setting:
> (qemu) info block
> sn1: json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
> "filename": "/home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2"}},
> "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}
> (qcow2)
> Attached to: image1
> Cache mode: writeback
> Backing file: /home/kvm_autotest_root/images/rhel811-64-virtio-scsi.qcow2 (chain depth: 1)

This behaviour is correct, too. In the image file, there is no backing file name stored, but at runtime we attached a backing file. So this difference needs to be reflected as a json: filename.

The bug is fixed in upstream commit ae0f57f0aa.

(In reply to Kevin Wolf from comment #10)
> Sorry, actually the behaviour in step 3 is correct. The bug only happens in
> step 4.
> [...]
> In comment 4, we have the backing file name stored in the image file
> /root/sn1. At runtime it is overridden with 'backing': null, but the
> blockdev-snapshot in step 3 attaches the backing file again. So the state in
> the QEMU process is exactly as described in the qcow2 header of /root/sn1
> and we don't need to use a json: protocol to override the setting.

So if {"execute":"blockdev-snapshot","arguments":{"node":"sn1","overlay":"sn2"}} works correctly, info block should be:

(qemu) info block
sn2: /root/sn2
    Attached to: image1
    Cache mode: writeback
    Backing file: /root/sn1 (chain depth: 2)

?

In summary: if backing-file is set when the node is created and a live snapshot is then taken with that node as "overlay", the state in the QEMU process will be exactly as described in the qcow2 header of the node. Otherwise, it will be a json: protocol setting??

> [...]
> This behaviour is correct, too. In the image file, there is no backing file
> name stored, but at runtime we attached a backing file. So this difference
> needs to be reflected as a json: filename.

(In reply to aihua liang from comment #12)
> In summary:
> If backing-file is set when the node is created and a live snapshot is then
> taken with that node as "overlay", the state in the QEMU process will be
> exactly as described in the qcow2 header of the node.
> Otherwise, it will be a json: protocol setting??

Yes, this is the theory (as long as the filename during creation and at runtime match exactly, i.e. not different relative paths or something).

Re-assigned to make sure the correct fix version? We used rc1. This BZ state looks right though: if qemu-4.2-rc2 contains the fix, we should get it in the next build.

The rc4 rebase was proposed yesterday and we should build it pretty soon (when the problems with https://projects.engineering.redhat.com/browse/BST-947 get fixed).

Tested on qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc; the issue has been resolved, so setting the bug's status to "Verified".

Test steps:

1.
Start the guest with the same qemu command line as used in the reproduction steps above.

2. Create snapshot target sn1:
{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}

3. Do the snapshot, and check block info after that:
{"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}
(qemu) info block
sn1: /root/sn1 (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2 (chain depth: 1)

4. Create snapshot target sn2:
{'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn2','size':21474836480},'job-id':'job1'}}
{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn2','filename':'/root/sn2'}}
{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn2','size':21474836480},'job-id':'job2'}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn2','file':'drive_sn2'}}
{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}

5. Do the snapshot and check block info:
{"execute":"blockdev-snapshot","arguments":{"node":"sn1","overlay":"sn2"}}
(qemu) info block
sn2: /root/sn2 (qcow2)
    Attached to: image1
    Cache mode: writeback
    Backing file: /root/sn1 (chain depth: 2)

6. Do live commit from sn2 to drive_image1:
{'execute': 'block-commit', 'arguments': { 'device': 'sn2','job-id':'j3'}}
{"timestamp": {"seconds": 1573714875, "microseconds": 513176}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j3"}}
{"timestamp": {"seconds": 1573714875, "microseconds": 513239}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j3"}}
{"return": {}}
{"timestamp": {"seconds": 1573714875, "microseconds": 553975}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j3"}}
{"timestamp": {"seconds": 1573714875, "microseconds": 554023}, "event": "BLOCK_JOB_READY", "data": {"device": "j3", "len": 327680, "offset": 327680, "speed": 0, "type": "commit"}}

7. After the commit job reaches the ready status, complete the job:
{ "execute": "block-job-complete", "arguments": { "device": "j3"}}
{"return": {}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664418}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j3"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664463}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j3"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664568}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j3", "len": 524288, "offset": 524288, "speed": 0, "type": "commit"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664632}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j3"}}
{"timestamp": {"seconds": 1573714895, "microseconds": 664680}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j3"}}

8. Check block info:
(qemu) info block
drive_image1: /home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2 (qcow2)
    Attached to: image1
    Cache mode: writeback

QEMU has been recently split into sub-components and, as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017
Created attachment 1627863 [details]
vm xml, libvirtd log, reproducing script

Description of problem:
As subject

Version-Release number of selected component (if applicable):
libvirt v5.8.0-304-g2cff65e4c6
qemu-kvm-4.1.0-13.module+el8.1.0+4313+ef76ec61.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start a VM with -blockdev enabled
2. Create 2 external snapshots, then do blockcommit:
virsh snapshot-create-as $VM s1 --no-metadata --disk-only
virsh snapshot-create-as $VM s2 --no-metadata --disk-only
virsh blockcommit $VM sda --active --wait --verbose

Actual results:
error: internal error: unable to execute QEMU command 'block-commit': 'libvirt-1-format' is not in this backing file chain

Expected results:
No error on blockcommit

Additional info:
The issue is not reproduced when -blockdev is disabled. See the vm xml, libvirtd log (filters: 2:util 1:qemu 1:security), and reproducing script in the attachment.
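The snapshot-node creation sequence used throughout this bug (blockdev-create for the protocol file, blockdev-add for the file node, blockdev-create for the qcow2 layer, blockdev-add for the format node, then job-dismiss) can be composed programmatically. A hedged sketch follows that only builds the QMP command dicts; sending them to a real QMP socket is out of scope, and the node/file names are parameters of the sketch, not fixed qemu requirements:

```python
# Builds the QMP command sequence used in this bug's reproduction steps
# to create and attach one snapshot overlay. This only constructs the
# dicts; wiring them to a QMP monitor socket is left out.

def snapshot_overlay_cmds(filename, file_node, fmt_node, size,
                          backing_file=None, backing_fmt=None):
    qcow2_opts = {"driver": "qcow2", "file": file_node, "size": size}
    if backing_file is not None:
        # Comment 4 variant: store the backing name in the image header.
        qcow2_opts["backing-file"] = backing_file
        qcow2_opts["backing-fmt"] = backing_fmt or "qcow2"
    return [
        {"execute": "blockdev-create", "arguments": {
            "options": {"driver": "file", "filename": filename, "size": size},
            "job-id": "job1"}},
        {"execute": "blockdev-add", "arguments": {
            "driver": "file", "node-name": file_node, "filename": filename}},
        {"execute": "blockdev-create", "arguments": {
            "options": qcow2_opts, "job-id": "job2"}},
        # 'backing': None serializes to JSON null -- the setting at the
        # heart of this bug.
        {"execute": "blockdev-add", "arguments": {
            "driver": "qcow2", "node-name": fmt_node,
            "file": file_node, "backing": None}},
        {"execute": "job-dismiss", "arguments": {"id": "job1"}},
        {"execute": "job-dismiss", "arguments": {"id": "job2"}},
    ]

cmds = snapshot_overlay_cmds("/root/sn1", "drive_sn1", "sn1",
                             21474836480,
                             backing_file="/root/base.qcow2")
assert cmds[0]["execute"] == "blockdev-create"
assert cmds[3]["arguments"]["backing"] is None
```

This mirrors the six-command blocks shown verbatim in the comments above.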