Bug 1781310 - Removal of persistent dirty bitmaps may cause segfault/crash
Summary: Removal of persistent dirty bitmaps may cause segfault/crash
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Eric Blake
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-12-09 18:11 UTC by John Snow
Modified: 2020-05-05 09:52 UTC
CC List: 5 users

Fixed In Version: qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 09:52:16 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+



Description John Snow 2019-12-09 18:11:21 UTC
Description of problem:

Removal of dirty bitmaps under certain circumstances can result in a segfault. The bug was introduced upstream during the 4.2 development cycle.

See https://lists.gnu.org/archive/html/qemu-devel/2019-12/msg01091.html for more details.

This bug was reported by Vladimir Sementsov-Ogievskiy of Virtuozzo.

He writes:

The bug triggers when we remove a persistent bitmap that has not yet been stored in the image AND at least one other bitmap is already stored in the image. So, something like:

1. create persistent bitmap A
2. shutdown vm  (bitmap A is synced)
3. start vm
4. create persistent bitmap B
5. remove bitmap B - it fails (and crashes if in transaction)


This will be fixed upstream in version 4.2-rc5.
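
For reference, the trigger sequence above expressed as QMP commands (a minimal sketch only; the node name "drive_data1" and the bitmap names are placeholders for illustration, not taken from the reporter's setup):

 1. Create persistent bitmap A:
    { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_data1", "name": "A", "persistent": true}}

 2. Shut down and restart the VM, so that bitmap A is written to the qcow2 image.

 3. Create persistent bitmap B (not yet stored in the image):
    { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_data1", "name": "B", "persistent": true}}

 4. Removing B fails; wrapped in a transaction, the removal crashes QEMU:
    { "execute": "transaction", "arguments": { "actions": [ {"type": "block-dirty-bitmap-remove", "data": {"node": "drive_data1", "name": "B"}}]}}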

Comment 2 John Snow 2019-12-13 22:25:05 UTC
This should now be fixed in rhel8/rhel-av-8.2.0 (which includes the RC5 fixes). I think I ought to leave this as POST until we have a build that contains the fix, though?

Comment 4 aihua liang 2019-12-19 08:33:38 UTC
Tested on qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc; the problem has been resolved, so the bug's status is set to "Verified".

Test steps:
 1. Start guest with qemu cmds:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -m 7168  \
    -smp 8,maxcpus=8,cores=4,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Server',+kvm_pv_unhalt  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20191219-023307-NqF6EWAc,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20191219-023307-NqF6EWAc,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idcKDQSW \
    -chardev socket,id=chardev_serial0,path=/var/tmp/serial-serial0-20191219-023307-NqF6EWAc,server,nowait \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20191219-023307-NqF6EWAc,path=/var/tmp/seabios-20191219-023307-NqF6EWAc,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20191219-023307-NqF6EWAc,iobase=0x402 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0,multifunction=on \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x2.0x1,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-3,addr=0x0,iothread=iothread0 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x2.0x2 \
    -blockdev node-name=file_data1,driver=file,aio=threads,filename=/home/data.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:bb:1a:62:67:56,id=idI4GPt2,netdev=idGpfGpk,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idGpfGpk,vhost=on \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait

  2. Add persistent bitmap to data disk
      { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_data1", "name": "bitmap0","persistent":true}}
 
  3. Quit vm
      (qemu) quit

  4. Re-start vm with qemu cmds in step1
  
  5. Add persistent bitmap "bitmap1" to data disk
     { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_data1", "name": "bitmap1","persistent":true}}

  6. Remove bitmaps in transaction mode
      { "execute": "transaction", "arguments": { "actions": [ {"type": "block-dirty-bitmap-remove","data":{"node":"drive_data1","name":"bitmap0"}},{"type": "block-dirty-bitmap-remove","data":{"node":"drive_data1","name":"bitmap1"}}]}}
{"return": {}}
 
   After step 6, the bitmap removal executed successfully.
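
  7. (Optional additional check, not part of the original test plan) Confirm the bitmaps are really gone, e.g. via query-block; exactly where the bitmap list appears in the reply is an assumption for this QEMU version:
      { "execute": "query-block" }
     The data disk's "dirty-bitmaps" list should be empty or absent, and re-adding "bitmap0"/"bitmap1" with block-dirty-bitmap-add should succeed without an "already exists" error.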

Comment 5 Ademar Reis 2020-02-05 23:10:21 UTC
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review the sub-component and change it if necessary the next time you review this BZ. Thanks.

Comment 7 errata-xmlrpc 2020-05-05 09:52:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

