Bug 1707303 - Error returned when attempting incremental live backup with an in-use bitmap is misleading
Summary: Error returned when attempting incremental live backup with an in-use bitmap is misleading
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.1
Assignee: John Snow
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-05-07 08:52 UTC by aihua liang
Modified: 2019-11-06 07:15 UTC (History)
CC: 6 users

Fixed In Version: qemu-kvm-4.1.0-1.module+el8.1.0+3966+4a23dca1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-06 07:14:47 UTC
Type: Bug
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:3723 None None None 2019-11-06 07:15:56 UTC

Description aihua liang 2019-05-07 08:52:23 UTC
Description of problem:
 The error returned via QMP when doing an incremental live backup with an in-use bitmap is misleading.

Version-Release number of selected component (if applicable):
 qemu-kvm-rhev version: qemu-kvm-rhev-2.12.0-27.el7.x86_64
 kernel version: 3.10.0-1041.el7.x86_64

How reproducible:
 100%

Steps to Reproduce:
1. Start the guest with the following qemu command:
   /usr/libexec/qemu-kvm \
    -S \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190123-032240-rOoB4cgD,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190123-032240-rOoB4cgD,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id8Ec4Bn  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190123-032240-rOoB4cgD,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190123-032240-rOoB4cgD,path=/var/tmp/seabios-20190123-032240-rOoB4cgD,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190123-032240-rOoB4cgD,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -object iothread,id=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel77-64-virtio.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bus=pci.0,iothread=iothread0 \
    -device virtio-net-pci,mac=9a:39:3a:3b:3c:3d,id=id1JNQsL,vectors=4,netdev=idVpZZ6A,bus=pci.0  \
    -netdev tap,id=idVpZZ6A,vhost=on \
    -m 4096  \
    -smp 2,maxcpus=2,cores=1,threads=1,sockets=2  \
    -cpu host,+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait \

2. Add a bitmap, then resume the VM:
  { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_image1", "name": "bitmap0"}}
  (qemu) c

3. Do an incremental live backup to "inc.img" using bitmap0 with speed 10:
  { "execute": "drive-backup", "arguments": { "device": "drive_image1","target": "inc.img", "sync": "incremental","format": "qcow2","speed":10,"bitmap":"bitmap0"}}
  {"timestamp": {"seconds": 1557218012, "microseconds": 130998}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_image1"}}
{"timestamp": {"seconds": 1557218012, "microseconds": 131053}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_image1"}}
{"return": {}}

4. Do a second incremental live backup to "inc1.img" with the same bitmap0:
  { "execute": "drive-backup", "arguments": { "device": "drive_image1","target": "inc1.img", "sync": "incremental","format": "qcow2","bitmap":"bitmap0"}}
{"error": {"class": "GenericError", "desc": "Need a root block node"}}
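The steps above can also be driven programmatically. The following is a minimal sketch of a QMP client (hypothetical helper names `qmp_cmd` and `QmpClient`), assuming qemu was started with `-qmp tcp:0:3000,server,nowait` as in the command line above:

```python
# Minimal QMP helper for scripting the reproduction steps above.
# qmp_cmd / QmpClient are illustrative names, not part of any library;
# assumes a qemu instance listening on -qmp tcp:0:3000,server,nowait.
import json
import socket

def qmp_cmd(execute, **arguments):
    """Build a single QMP command as a JSON line."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

class QmpClient:
    def __init__(self, host="127.0.0.1", port=3000):
        self.sock = socket.create_connection((host, port))
        self.f = self.sock.makefile("rw")
        json.loads(self.f.readline())            # consume greeting banner
        self._send(qmp_cmd("qmp_capabilities"))  # leave negotiation mode

    def _send(self, line):
        self.f.write(line + "\n")
        self.f.flush()
        while True:
            resp = json.loads(self.f.readline())
            if "event" not in resp:              # skip async events
                return resp

    def command(self, execute, **arguments):
        return self._send(qmp_cmd(execute, **arguments))

if __name__ == "__main__":
    qmp = QmpClient()
    qmp.command("block-dirty-bitmap-add", node="drive_image1", name="bitmap0")
    qmp.command("drive-backup", device="drive_image1", target="inc.img",
                sync="incremental", format="qcow2", speed=10, bitmap="bitmap0")
    # The second backup with the same bitmap reproduces the error:
    print(qmp.command("drive-backup", device="drive_image1", target="inc1.img",
                      sync="incremental", format="qcow2", bitmap="bitmap0"))
```

The second drive-backup call should return the misleading "Need a root block node" error on the affected versions.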

Actual results:
  The error does not indicate that bitmap0 is in use.

Expected results:
  The error message should state that bitmap0 is in use.

Additional info:
  When testing with -blockdev instead, the error message is clear:
   { "execute": "blockdev-backup", "arguments": { "device": "drive_image1", "sync":"incremental","target":"sn1","bitmap":"bitmap0","job-id":"j1","speed":10}}
    {"timestamp": {"seconds": 1557217809, "microseconds": 475196}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1557217809, "microseconds": 475248}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}

  { "execute": "blockdev-backup", "arguments": { "device": "drive_image1", "sync":"incremental","target":"sn2","bitmap":"bitmap0","job-id":"j2"}}
  {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap0' is currently in use by another operation and cannot be used"}}

Comment 2 Ademar Reis 2019-05-09 21:14:50 UTC
Take

Comment 3 Ademar Reis 2019-05-09 21:15:46 UTC
(In reply to Ademar Reis from comment #2)
> Take

I pressed the wrong button, please ignore that comment :-)

Comment 4 John Snow 2019-05-10 16:57:17 UTC
OK, so this is the error message being confusing. I'm adjusting the bug title to reflect that and I will see if this can't be improved quickly...

The problem, I think, is that when you use drive-backup, the job creates a new block backend above the root node. When you try to run a second job, it finds this other block backend instead, and assumes that the node you specified isn't a "root node", and errors out before we even find the bitmap.
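The ordering problem described above can be sketched as follows (an illustrative Python model only, not QEMU's actual code):

```python
# Hypothetical model of the check ordering described above; NOT QEMU code.
# Once the first backup job inserts its BlockBackend above the root node,
# a naive "is this a root node?" check fails before the bitmap's busy
# state is ever examined, producing the misleading error.

def start_incremental_backup(node, bitmap):
    # Check 1: fires first in the buggy ordering.
    if not node["looks_like_root"]:
        return "Need a root block node"          # misleading error
    # Check 2: never reached while a job's BlockBackend sits on top.
    if bitmap["busy"]:
        return "Bitmap '%s' is currently in use" % bitmap["name"]
    return "OK"

# State after the first drive-backup job has started:
node = {"looks_like_root": False}   # job's BlockBackend hides the root
bitmap0 = {"name": "bitmap0", "busy": True}

print(start_incremental_backup(node, bitmap0))  # -> Need a root block node
```

In this model, the bitmap-busy check is shadowed by the root-node check, which is why the user never sees the informative "bitmap is in use" message.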

I believe this has come up before, but I think there was no resolution. I'll try to press forward here.

Thanks!

Comment 5 John Snow 2019-05-10 21:53:04 UTC
proposed patch upstream @ https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg02645.html

Comment 6 Danilo Cesar Lemes de Paula 2019-06-04 22:51:03 UTC
Moving to POST manually, as the patch was posted but the overseer bot was behaving badly.

Comment 7 Danilo Cesar Lemes de Paula 2019-08-15 13:51:44 UTC
Fixed in qemu-kvm-4.1.0-1.module+el8.1.0+3966+4a23dca1

Comment 9 aihua liang 2019-08-16 06:26:51 UTC
Verified on qemu-kvm-4.1.0-1.module+el8.1.0+3966+4a23dca1.x86_64; the problem has been resolved, so the bug's status is set to "Verified".

Test Steps:
 1. Start the guest with the following qemu command:
   /usr/libexec/qemu-kvm \
    -S \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190123-032240-rOoB4cgD,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190123-032240-rOoB4cgD,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id8Ec4Bn  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190123-032240-rOoB4cgD,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190123-032240-rOoB4cgD,path=/var/tmp/seabios-20190123-032240-rOoB4cgD,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190123-032240-rOoB4cgD,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -object iothread,id=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel77-64-virtio.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bus=pci.0,iothread=iothread0 \
    -device virtio-net-pci,mac=9a:39:3a:3b:3c:3d,id=id1JNQsL,vectors=4,netdev=idVpZZ6A,bus=pci.0  \
    -netdev tap,id=idVpZZ6A,vhost=on \
    -m 4096  \
    -smp 2,maxcpus=2,cores=1,threads=1,sockets=2  \
    -cpu host,+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait \

2. Add a bitmap, then resume the VM:
  { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_image1", "name": "bitmap0"}}
  (qemu) c

3. Do an incremental live backup to "inc.img" using bitmap0 with speed 10:
  { "execute": "drive-backup", "arguments": { "device": "drive_image1","target": "inc.img", "sync": "incremental","format": "qcow2","speed":10,"bitmap":"bitmap0"}}
  {"timestamp": {"seconds": 1557218012, "microseconds": 130998}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_image1"}}
{"timestamp": {"seconds": 1557218012, "microseconds": 131053}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_image1"}}
{"return": {}}

4. Do a second incremental live backup to "inc1.img" with the same bitmap0:
  { "execute": "drive-backup", "arguments": { "device": "drive_image1","target": "inc1.img", "sync": "incremental","format": "qcow2","bitmap":"bitmap0"}}
  {"error": {"class": "GenericError", "desc": "Node 'drive_image1' is busy: block device is in use by block job: backup"}}

Comment 11 errata-xmlrpc 2019-11-06 07:14:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723

