Bug 2009236
| Summary: | Qemu coredump when backup with x-perf | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | aihua liang <aliang> |
| Component: | qemu-kvm | Assignee: | Stefano Garzarella <sgarzare> |
| qemu-kvm sub component: | Incremental Live Backup | QA Contact: | aihua liang <aliang> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | coli, jinzhao, jmaloy, juzhang, kkiwi, ngu, qzhang, virt-maint |
| Version: | 8.6 | Keywords: | Triaged |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-6.2.0-1.module+el8.6.0+13725+61ae1949 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 2009310 (view as bug list) | Environment: | |
| Last Closed: | 2022-05-10 13:21:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 2027716 | | |
| Bug Blocks: | 2009310 | | |
Description
aihua liang 2021-09-30 08:41:37 UTC
qemu-kvm-6.0.0-30.module+el8.5.0+12586+476da3e1 also hit this issue.

Stefano, can you take this one? Setting priority to High.

(In reply to Klaus Heinrich Kiwi from comment #2)
> Stefano, can you take this one? Setting priority to High.

Yep, I'll also assign the RHEL 9 bug BZ 2009310 to myself. I'll post updates there and use this one only for the backport/rebase info.

Mass update of DTM/ITM to +3 values, since the rebase of qemu-6.2 into RHEL 8.6 has been delayed or slowed by process roadblocks (authentication changes, gating issues). This avoids the DevMissed bot and, worse, the bot that could come along and strip release+. The +3 was chosen mainly to give a cushion. Also added the qemu-6.2 rebase bug 2027716 as a dependency.

QE bot (pre-verify): set 'Verified:Tested,SanityOnly' as gating/tier1 tests pass.

Tested on qemu-kvm-6.2.0-1.module+el8.6.0+13725+61ae1949; the issue no longer reproduces, so the bug status is set to "VERIFIED".

1. Start the guest with the following qemu command (a sketch of connecting to its QMP monitor follows the command):

    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -sandbox on \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0 \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2 \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,server=on,id=qmp_id_qmpmonitor1,wait=off,path=/tmp/monitor-qmpmonitor1-20210927-235344-uQJJ6aFc \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,server=on,id=qmp_id_catch_monitor,wait=off,path=/tmp/monitor-catch_monitor-20210927-235344-uQJJ6aFc \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idmerT3z \
    -chardev socket,server=on,id=chardev_serial0,wait=off,path=/tmp/serial-serial0-20210927-235344-uQJJ6aFc \
    -device isa-serial,id=serial0,chardev=chardev_serial0 \
    -chardev socket,id=seabioslog_id_20210927-235344-uQJJ6aFc,path=/tmp/seabios-20210927-235344-uQJJ6aFc,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210927-235344-uQJJ6aFc,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel850-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:c0:90:a7:2f:40,id=idjZ1FPm,netdev=id4rOUG1,bus=pcie-root-port-3,addr=0x0 \
    -netdev tap,id=id4rOUG1,vhost=on \
    -vnc :0 \
    -rtc base=utc,clock=host,driftfix=slew \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -qmp tcp:0:3000,server=on,wait=off \
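The QMP commands in steps 2 and 3 go to the monitor exposed by the "-qmp tcp:0:3000,server=on,wait=off" option above. As a minimal sketch (not part of the original report), a client could connect and leave capabilities negotiation mode like this; the host/port and the qmp_command() helper are illustrative assumptions:

    # Hedged sketch: raw QMP client for the "-qmp tcp:0:3000" monitor above.
    # Host/port and the helper name are illustrative, not taken from the report.
    import json
    import socket

    def qmp_command(chan, command, arguments=None):
        """Send one QMP command and return the first reply carrying
        a 'return' or 'error' key (asynchronous events are skipped)."""
        msg = {"execute": command}
        if arguments is not None:
            msg["arguments"] = arguments
        chan.write(json.dumps(msg) + "\n")
        chan.flush()
        while True:
            reply = json.loads(chan.readline())
            if "return" in reply or "error" in reply:
                return reply

    sock = socket.create_connection(("127.0.0.1", 3000))
    chan = sock.makefile("rw")
    json.loads(chan.readline())            # QMP greeting banner
    qmp_command(chan, "qmp_capabilities")  # leave capabilities negotiation mode
    print(qmp_command(chan, "query-status"))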
2. Create the backup target node:

    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn$i','filename':'/root/sn$i'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
    {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}

3. Do the backup with the x-perf option:

    {"execute": "blockdev-backup", "arguments": {"device": "drive_image1","job-id": "j1", "sync": "full", "target": "sn1","x-perf":{"use-copy-range":true,"max-workers":4294967296,"max-chunk":4294967296}}}

Test result:

After step 3, the backup does not start; instead of crashing, QEMU rejects the out-of-range value with an error (a sketch of an in-range invocation follows the closing note below):

    {"error": {"class": "GenericError", "desc": "max-workers must be between 1 and 2147483647"}}

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759
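For completeness, a hedged sketch of the same step-3 backup request with x-perf values kept inside the range the fixed build accepts (1 to 2147483647 for max-workers, per the error text above). It reuses the illustrative qmp_command() helper and QMP channel from the earlier sketch; the chosen values are examples, not recommendations from the report:

    # Hedged sketch: same blockdev-backup as step 3, but with an in-range
    # max-workers value, which the fixed build accepts instead of erroring
    # out (or, on unfixed builds, coredumping) on 4294967296.
    reply = qmp_command(chan, "blockdev-backup", {
        "device": "drive_image1",
        "job-id": "j1",
        "sync": "full",
        "target": "sn1",
        "x-perf": {
            "use-copy-range": True,
            "max-workers": 8,        # must be within [1, 2147483647]
        },
    })
    print(reply)  # expect {'return': {}} on success, or an 'error' dict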