Bug 1973829 - [incremental backup] qemu-kvm hangs when Rebooting the VM during full backup [rhel-8.4.0.z]
Summary: [incremental backup] qemu-kvm hangs when Rebooting the VM during full backup ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 8.5
Assignee: Sergio Lopez
QA Contact: aihua liang
URL:
Whiteboard:
Depends On: 1960137
Blocks: 1892681
 
Reported: 2021-06-18 19:40 UTC by RHEL Program Management Team
Modified: 2021-08-31 08:14 UTC
CC: 9 users

Fixed In Version: qemu-kvm-5.2.0-16.module+el8.4.0+11721+c8bbc1be.3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1960137
Environment:
Last Closed: 2021-08-31 08:07:47 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:3340 0 None None None 2021-08-31 08:08:00 UTC

Comment 5 Yanan Fu 2021-07-08 16:01:28 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 6 aihua liang 2021-07-09 04:19:42 UTC
Tested on qemu-kvm-5.2.0-16.module+el8.4.0+11721+c8bbc1be.3; the bug has been fixed.

Test Env:
  kernel: 4.18.0-305.el8.x86_64
  qemu-kvm: qemu-kvm-5.2.0-16.module+el8.4.0+11721+c8bbc1be.3


Test Steps:
 1. Start guest with qemu cmds:
 /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,server=on,path=/tmp/monitor-qmpmonitor1-20210512-234257-mOeaMK07,id=qmp_id_qmpmonitor1,wait=off  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,server=on,path=/tmp/monitor-catch_monitor-20210512-234257-mOeaMK07,id=qmp_id_catch_monitor,wait=off  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idXTMc9z \
    -chardev socket,server=on,path=/tmp/serial-serial0-20210512-234257-mOeaMK07,id=chardev_serial0,wait=off \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210512-234257-mOeaMK07,path=/tmp/seabios-20210512-234257-mOeaMK07,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210512-234257-mOeaMK07,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -object iothread,id=iothread1 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel840-64-virtio.raw,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:bb:ed:35:8d:44,id=idSFYXRM,netdev=id47qtZ5,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=id47qtZ5,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait

  2. Start nbd server
     {"execute":"nbd-server-start","arguments":{"addr":{"type":"inet","data":{"host":"10.73.114.14","port":"10809"}}}}

  3. Create scratch.img
      #qemu-img create -f qcow2 -b /home/kvm_autotest_root/images/rhel840-64-virtio.raw -F raw scratch.img

  4. Add scratch.img
     {"execute":"blockdev-add","arguments":{"driver":"file","filename":"/home/scratch.img","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}}
     {"execute":"blockdev-add","arguments":{"node-name":"tmp","read-only":false,"driver":"qcow2","file":"libvirt-3-storage","backing":"drive_image1"}}

  5. Start backup channel
     { "execute": "transaction", "arguments": { "actions": [ {"type": "blockdev-backup", "data": { "device": "drive_image1", "target": "tmp", "sync": "none", "job-id":"j1" } }, {"type": "block-dirty-bitmap-add", "data": { "node": "drive_image1", "name": "bitmap0" } } ] } }
{"timestamp": {"seconds": 1621303220, "microseconds": 251587}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1621303220, "microseconds": 251675}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}

  6. Expose the backup image
    {"execute":"nbd-server-add","arguments":{"device":"tmp"}}

  7. In nbd client, create backup image
    #qemu-img create -f qcow2 back1.img 20G
    Note: qemu-kvm version in nbd client: qemu-kvm-6.0.0-21.module+el8.5.0+11555+e0ab0d09
  
  8. Pull backup image from nbd client.
    #./copyif3.sh nbd://10.73.114.14:10809/tmp back1.img
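The copyif3.sh script itself is not included in the report; a minimal pull-copy equivalent using qemu-img convert might look like the following sketch (the `copy_backup` function name and the exact flags are assumptions, not the actual script):

```shell
# copy_backup: assumed equivalent of the report's copyif3.sh (the real
# script is not shown in the bug). The NBD export presents the guest-visible
# data, so the client reads it as raw and writes a local qcow2 image.
copy_backup() {
    src="$1"; dst="$2"
    if [ -z "$src" ] || [ -z "$dst" ]; then
        echo "usage: copy_backup nbd://host:port/export dest.img" >&2
        return 1
    fi
    qemu-img convert -f raw -O qcow2 "$src" "$dst"
}
# e.g. copy_backup nbd://10.73.114.14:10809/tmp back1.img
```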

  9. Repeatedly reboot inside the VM while step 8 is in progress.
    (guest)#reboot
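The repeated reboot in step 9 can be scripted against the running pull; a hedged sketch (both the helper and the `ssh root@guest reboot` invocation are assumptions, not the commands actually used in this test):

```shell
# repeat_while_alive: run a command over and over while another process
# (here, the backup pull from step 8) is still running.
repeat_while_alive() {
    pid="$1"; shift
    while kill -0 "$pid" 2>/dev/null; do
        "$@" || true
        sleep 1
    done
}
# e.g. ./copyif3.sh nbd://10.73.114.14:10809/tmp back1.img & copy_pid=$!
#      repeat_while_alive "$copy_pid" ssh root@guest reboot
```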

Test Result:
  Pull backup completed successfully.


Setting the bug's status to "Verified".

Comment 7 Nir Soffer 2021-07-25 11:53:28 UTC
Sergio, what is the correct way to require this version?

Do we need to specify the entire version like this:

    qemu-kvm >= 5.2.0-16.module+el8.4.0+11721+c8bbc1be.3

Or can we use just the release number?

    qemu-kvm >= 5.2.0-16

The latter form is preferable, since we would like this
requirement to work on CentOS Stream and other RHEL-like builds
that may not have the module info in the release part:

- http://mirror.centos.org/centos/8/virt/x86_64/advanced-virtualization/Packages/q/qemu-kvm-5.2.0-16.el8.x86_64.rpm
- http://mirror.centos.org/centos/8-stream/virt/x86_64/advancedvirt-common/Packages/q/qemu-kvm-5.2.0-16.el8s.x86_64.rpm

Comment 8 Sergio Lopez 2021-07-26 08:42:16 UTC
(In reply to Nir Soffer from comment #7)
> Sergio, what is the correct way to require this version?
> 
> Do we need to specify the entire version like this:
> 
>     qemu-kvm >= 5.2.0-16.module+el8.4.0+11721+c8bbc1be.3
> 
> Or can we use just the release number?
> 
>     qemu-kvm >= 5.2.0-16
> 
> The latter form is preferable, since we would like this
> requirement to work on CentOS Stream and other RHEL-like builds
> that may not have the module info in the release part:
> 
> -
> http://mirror.centos.org/centos/8/virt/x86_64/advanced-virtualization/
> Packages/q/qemu-kvm-5.2.0-16.el8.x86_64.rpm
> -
> http://mirror.centos.org/centos/8-stream/virt/x86_64/advancedvirt-common/
> Packages/q/qemu-kvm-5.2.0-16.el8s.x86_64.rpm

I'd say the second form should be enough, as the release number is increased each time new patches are added.
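A requirement of the shorter form also compares as expected under plain version ordering; a quick sanity check with GNU `sort -V` (the `ver_ge` helper is illustrative only and approximates, but is not identical to, RPM's own rpmvercmp logic):

```shell
# ver_ge: succeeds when $1 >= $2 in GNU version-sort order.
# Note: an approximation of RPM version comparison, for illustration.
ver_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
installed="5.2.0-16.module+el8.4.0+11721+c8bbc1be.3"
required="5.2.0-16"
ver_ge "$installed" "$required" && echo "requirement satisfied"
```

The modular release string sorts after the bare "5.2.0-16", so the shorter requirement accepts both the RHEL modular build and the CentOS builds linked above.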

Comment 10 errata-xmlrpc 2021-08-31 08:07:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3340

