Bug 1459801
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | nbd/server.c:nbd_receive_request():L706: read failed when do migration_cancel | | |
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Suqin Huang <shuang> |
| Component: | qemu-kvm | Assignee: | Eric Blake <eblake> |
| qemu-kvm sub component: | NBD | QA Contact: | zixchen |
| Status: | CLOSED WONTFIX | Docs Contact: | |
| Severity: | medium | | |
| Priority: | medium | CC: | afrosi, aliang, chayang, coli, eblake, juzhang, knoel, rbalakri, virt-maint, xuwei |
| Version: | --- | Keywords: | Triaged |
| Target Milestone: | rc | | |
| Target Release: | 8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-01-15 07:37:39 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1458725, 1473046 | | |
Description
Suqin Huang, 2017-06-08 08:22:21 UTC
The error happens before the migration operation completes: boot up the source image and the destination image, and then the error shows. I can reproduce it with qemu-kvm-rhev-2.8.0-6.el7.x86_64 by cancelling the migration from the source guest.

Source:

```
(qemu) migrate -d tcp:0:5200
(qemu) migrate_cancel
```

Result on the destination guest:

```
(qemu) qemu-kvm: Unknown combination of migration flags: 0
qemu-kvm: error while loading state section id 3(ram)
qemu-kvm: load of migration failed: Invalid argument
```

Server:

```
# qemu-nbd -f qcow2 rhel74-64-virtio-scsi.qcow2 --share=4 -t -p 9000
nbd/server.c:nbd_receive_request():L710: read failed
```

This is not the same as Bug 1458725, as it only happens when doing migrate_cancel; the migration completes successfully if it is not cancelled.

---

I wonder if NBD drives should be a migration blocker? After all, having two separate clients (source and destination of the qemu migration) both trying to connect as read-write clients to the same NBD server may not always work right. The NBD protocol permits servers to allow multiple clients, but does not require it to work well, and I don't know whether qemu-nbd as a server is designed to handle this sort of scenario.

---

Reproduced this bug in rhel8. Tested with:

- qemu-kvm-3.1.0-20.module+el8+2904+e658c755
- kernel-4.18.0-80.el8

Steps:

Server:

```
# qemu-nbd -f qcow2 sn.qcow2 -t -p 9000 --share=2
```

Client:

1. Boot guest in source:

```
# /usr/libexec/qemu-kvm \
    -name 'guest' \
    -machine q35 \
    -nodefaults \
    -vga qxl \
    -vnc :1 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
    -blockdev driver=nbd,cache.direct=on,cache.no-flush=off,server.host=localhost,server.port=9000,server.type=inet,node-name=my_file \
    -blockdev driver=raw,node-name=my,file=my_file \
    -device scsi-hd,drive=my \
    -monitor stdio \
    -m 8192 \
    -smp 8 \
    -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9 \
    -netdev tap,id=idxgXAlm \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/timao/monitor-qmpmonitor1-20180220-094308-h9I6hRsI,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0
```

2. Boot guest in target:

```
# /usr/libexec/qemu-kvm \
    -name 'guest' \
    -machine q35 \
    -nodefaults \
    -vga qxl \
    -vnc :2 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
    -blockdev driver=nbd,cache.direct=on,cache.no-flush=off,server.host=localhost,server.port=9000,server.type=inet,node-name=my_file \
    -blockdev driver=raw,node-name=my,file=my_file \
    -device scsi-hd,drive=my \
    -monitor stdio \
    -m 8192 \
    -smp 8 \
    -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9 \
    -netdev tap,id=idxgXAlm \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/timao/monitor-qmpmonitor1-20180220-094308-h9I6hRsI,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0 \
    -incoming tcp:0:5200
```

3. Migration and cancel in source:

```
(qemu) migrate -d tcp:0:5200
(qemu) migrate_cancel
```

Result:

Target:

```
(qemu) qemu-kvm: check_section_footer: Read section footer failed: -5
qemu-kvm: load of migration failed: Invalid argument
```

Server:

```
# qemu-nbd -f qcow2 sn.qcow2 -t -p 9000 --share=2
Disconnect client, due to: Unexpected end-of-file before all bytes were read
Disconnect client, due to: Unexpected end-of-file before all bytes were read
Disconnect client, due to: Unexpected end-of-file before all bytes were read
```

---

QEMU has recently been split into sub-components, and as a one-time operation to avoid breakage of tools we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.

---

Reproduced this issue with:

- qemu-kvm-5.1.0-10.module+el8.3.0+8254+568ca30d
- kernel-4.18.0-234

Same steps as Comment 9.

Server:

```
# qemu-nbd -f qcow2 /var/lib/libvirt/images/f31.qcow2 -p 9000 -t --share=2
```

1. Boot source guest:

```
# /usr/libexec/qemu-kvm \
    -name 'guest' \
    -machine q35 \
    -nodefaults \
    -vga qxl \
    -vnc :1 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
    -blockdev driver=nbd,cache.direct=on,cache.no-flush=off,server.host=localhost,server.port=9000,server.type=inet,node-name=my_file \
    -blockdev driver=raw,node-name=my,file=my_file \
    -device scsi-hd,drive=my \
    -monitor stdio \
    -m 8192 \
    -smp 8 \
    -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9 \
    -netdev tap,id=idxgXAlm \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/tmp/monitor,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0
```

2. Boot dest guest:

```
# /usr/libexec/qemu-kvm \
    -name 'guest' \
    -machine q35 \
    -nodefaults \
    -vga qxl \
    -vnc :2 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x3 \
    -blockdev driver=nbd,cache.direct=on,cache.no-flush=off,server.host=localhost,server.port=9000,server.type=inet,node-name=my_file \
    -blockdev driver=raw,node-name=my,file=my_file \
    -device scsi-hd,drive=my \
    -monitor stdio \
    -m 8192 \
    -smp 8 \
    -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b3,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0,addr=0x9 \
    -netdev tap,id=idxgXAlm \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/tmp/monitor-target,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0 \
    -incoming tcp:0:5200
```

3. Migration and cancel in source:

```
(qemu) migrate -d tcp:0:5200
(qemu) migrate_cancel
```

Result:

In target:

```
qemu-kvm: error while loading state section id 1(ram)
qemu-kvm: load of migration failed: Invalid argument
```

In server:

```
qemu-nbd: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes were read
qemu-nbd: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes were read
qemu-nbd: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes were read
```

---

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release; therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

---

With qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a.x86_64 the bug still reproduces, but libvirt does not support the 'migrate_cancel' operation. Hence, agree to close this.
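The server-side messages above are benign in themselves: when migrate_cancel tears down the destination qemu, its NBD connection is closed partway through a request, and the server's blocking read for a fixed-size request header hits EOF early. A minimal sketch of that failure mode, using plain Python TCP sockets rather than a real NBD server (the 28-byte request size and the 0x25609513 magic match the NBD wire format; the helper names and log text are illustrative):

```python
import socket
import threading

# An NBD request header is 28 bytes on the wire:
# 4 magic + 2 flags + 2 type + 8 handle + 8 offset + 4 length.
REQUEST_LEN = 28

def read_exact(sock, n):
    """Read exactly n bytes, or return None on early EOF (the 'read failed' case)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:  # peer closed the connection mid-request
            return None
        buf += chunk
    return buf

def serve_once(listener, log):
    """Accept one client and try to read a single full request header."""
    conn, _ = listener.accept()
    with conn:
        if read_exact(conn, REQUEST_LEN) is None:
            log.append("Disconnect client, due to: Unexpected end-of-file "
                       "before all bytes were read")
        else:
            log.append("got full request")

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
log = []
server = threading.Thread(target=serve_once, args=(listener, log))
server.start()

# The "client" sends only the 4-byte NBD_REQUEST_MAGIC and hangs up,
# like a destination qemu being torn down by migrate_cancel.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"\x25\x60\x95\x13")
client.close()
server.join()
listener.close()
print(log[0])
```

In other words, the server log is a symptom of the client's abrupt exit, not an independent fault in qemu-nbd, which is consistent with the decision to close this as WONTFIX.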