Bug 1707303
| Summary: | Error returned when attempting incremental live backup with an in-use bitmap is misleading | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | aihua liang <aliang> |
| Component: | qemu-kvm | Assignee: | John Snow <jsnow> |
| Status: | CLOSED ERRATA | QA Contact: | aihua liang <aliang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | --- | CC: | coli, ddepaula, juzhang, ngu, qzhang, virt-maint |
| Target Milestone: | rc | | |
| Target Release: | 8.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-4.1.0-1.module+el8.1.0+3966+4a23dca1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-11-06 07:14:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
aihua liang 2019-05-07 08:52:23 UTC

(In reply to Ademar Reis from comment #2)
> Take

I pressed the wrong button, please ignore that comment :-)

OK, so the real issue is that the error message is confusing. I'm adjusting the bug title to reflect that, and I will see if this can't be improved quickly.

The problem, I think, is that when you use drive-backup, the job creates a new block backend above the root node. When you try to run a second job, it finds this other block backend instead, assumes that the node you specified isn't a "root node", and errors out before we even find the bitmap. I believe this has come up before, but there was no resolution. I'll try to press forward here. Thanks!

Proposed patch upstream @ https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg02645.html

Moving to POST manually, as the patch was posted upstream but the overseer bot was behaving badly.

Fixed in qemu-kvm-4.1.0-1.module+el8.1.0+3966+4a23dca1

Verified on qemu-kvm-4.1.0-1.module+el8.1.0+3966+4a23dca1.x86_64: the problem has been resolved, so setting the bug's status to "Verified".
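The lookup behavior described above can be sketched as a toy model: a backup job attaches its own block backend above the node, so a later lookup sees an occupied node and reports "busy" before the bitmap is ever examined. This is an illustrative Python model only; the class and function names are hypothetical and do not correspond to QEMU's actual internals.

```python
# Toy model of the block-graph lookup described in the comment above.
# All names here are illustrative, not QEMU's real data structures.

class Node:
    def __init__(self, name):
        self.name = name
        self.parents = []  # block backends attached above this node

def drive_backup(node, job_name):
    # A backup job attaches a new block backend above the root node.
    node.parents.append(job_name)

def lookup_for_backup(node):
    # The pre-fix check: a node with an extra parent no longer looks
    # like a plain root node, so the command fails with a generic
    # "busy" error before the bitmap is even found.
    if node.parents:
        return ("Node '%s' is busy: block device is in use by "
                "block job: backup" % node.name)
    return None

drive = Node("drive_image1")
assert lookup_for_backup(drive) is None   # first backup is allowed
drive_backup(drive, "backup")             # job's backend is now attached
print(lookup_for_backup(drive))
```

The misleading part is that the message blames the node rather than the in-use bitmap, even though the bitmap is what the user actually named in the second drive-backup command.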
Test Steps:

1. Start guest with qemu cmds:

/usr/libexec/qemu-kvm \
    -S \
    -name 'avocado-vt-vm1' \
    -machine pc \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2 \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190123-032240-rOoB4cgD,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190123-032240-rOoB4cgD,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id8Ec4Bn \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190123-032240-rOoB4cgD,server,nowait \
    -device isa-serial,chardev=serial_id_serial0 \
    -chardev socket,id=seabioslog_id_20190123-032240-rOoB4cgD,path=/var/tmp/seabios-20190123-032240-rOoB4cgD,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190123-032240-rOoB4cgD,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -object iothread,id=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel77-64-virtio.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bus=pci.0,iothread=iothread0 \
    -device virtio-net-pci,mac=9a:39:3a:3b:3c:3d,id=id1JNQsL,vectors=4,netdev=idVpZZ6A,bus=pci.0 \
    -netdev tap,id=idVpZZ6A,vhost=on \
    -m 4096 \
    -smp 2,maxcpus=2,cores=1,threads=1,sockets=2 \
    -cpu host,+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -vnc :0 \
    -rtc base=utc,clock=host,driftfix=slew \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait

2. Add a bitmap, then resume the vm:

{ "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_image1", "name": "bitmap0"}}
(qemu) c

3. Do incremental live backup to "inc.img" with bitmap0 and speed 10:

{ "execute": "drive-backup", "arguments": { "device": "drive_image1", "target": "inc.img", "sync": "incremental", "format": "qcow2", "speed": 10, "bitmap": "bitmap0"}}
{"timestamp": {"seconds": 1557218012, "microseconds": 130998}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_image1"}}
{"timestamp": {"seconds": 1557218012, "microseconds": 131053}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_image1"}}
{"return": {}}

4. Do incremental live backup to "inc1.img" with bitmap0:

{ "execute": "drive-backup", "arguments": { "device": "drive_image1", "target": "inc1.img", "sync": "incremental", "format": "qcow2", "bitmap": "bitmap0"}}
{"error": {"class": "GenericError", "desc": "Node 'drive_image1' is busy: block device is in use by block job: backup"}}

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723