Bug 1824363
Summary: | QEMU core dump when taking a snapshot with the same node and overlay that does not exist in the snapshot chain | |
---|---|---|---
Product: | Red Hat Enterprise Linux 9 | Reporter: | aihua liang <aliang>
Component: | qemu-kvm | Assignee: | Kevin Wolf <kwolf>
qemu-kvm sub component: | Block Jobs | QA Contact: | aihua liang <aliang>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | medium | |
Priority: | low | CC: | coli, jinzhao, juzhang, kwolf, mrezanin, ngu, qzhang, virt-maint
Version: | 9.0 | Keywords: | EasyFix, Reopened, Triaged
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | qemu-kvm-6.2.0-1.el9 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2022-05-17 12:23:22 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
aihua liang
2020-04-16 02:31:27 UTC
As this is a negative test and it cannot be triggered through libvirt, setting its priority to "low".

Tested on qemu-kvm-5.1.0-5.module+el8.3.0+7975+b80d25f1; still hit this issue.

Moving RHEL-AV bugs to RHEL 9. If it is necessary to resolve this in RHEL 8, clone it to the current RHEL 8 release.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

Tested on qemu-kvm-6.1.0-5.el9; still hit this core dump issue.

Hi Kevin, do we plan to fix this? If yes, I will reopen it. Thanks, Aliang

Oh, this one didn't even have an assignee. Yes, I'm reopening it. I'll fix it upstream and then we'll get it from the 6.2 rebase in time for 9.0-GA.

Tested with qemu-kvm-6.2.0-1.el9; this issue no longer occurs.

Test Steps:

1. Start the guest with the qemu command:

```sh
/usr/libexec/qemu-kvm \
  -name 'avocado-vt-vm1' \
  -sandbox on \
  -machine q35,memory-backend=mem-machine_mem \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
  -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0 \
  -nodefaults \
  -device VGA,bus=pcie.0,addr=0x2 \
  -m 30720 \
  -object memory-backend-ram,size=30720M,id=mem-machine_mem \
  -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2 \
  -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
  -chardev socket,wait=off,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20211215-212014-u83qUkY3,server=on \
  -mon chardev=qmp_id_qmpmonitor1,mode=control \
  -chardev socket,wait=off,id=qmp_id_catch_monitor,path=/tmp/monitor-catch_monitor-20211215-212014-u83qUkY3,server=on \
  -mon chardev=qmp_id_catch_monitor,mode=control \
  -device pvpanic,ioport=0x505,id=ida8F4GE \
  -chardev socket,wait=off,id=chardev_serial0,path=/tmp/serial-serial0-20211215-212014-u83qUkY3,server=on \
  -device isa-serial,id=serial0,chardev=chardev_serial0 \
  -chardev socket,id=seabioslog_id_20211215-212014-u83qUkY3,path=/tmp/seabios-20211215-212014-u83qUkY3,server=on,wait=off \
  -device isa-debugcon,chardev=seabioslog_id_20211215-212014-u83qUkY3,iobase=0x402 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
  -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
  --object iothread,id=iothread1 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
  -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
  -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel900-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
  -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
  -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
  -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
  -blockdev node-name=file_data1,driver=file,aio=threads,filename=/home/data.qcow2,cache.direct=on,cache.no-flush=off \
  -blockdev node-name=drive_data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_data1 \
  -device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie.0-root-port-6,iothread=iothread1 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
  -device virtio-net-pci,mac=9a:37:88:01:97:b6,id=idvRVbq8,netdev=idSXhwTw,bus=pcie-root-port-3,addr=0x0 \
  -netdev tap,id=idSXhwTw,vhost=on \
  -vnc :0 \
  -rtc base=utc,clock=host,driftfix=slew \
  -boot menu=off,order=cdn,once=c,strict=off \
  -enable-kvm \
  -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
  -monitor stdio
```
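The QMP exchanges in steps 2 and 3 can also be replayed programmatically against the monitor started above. Below is a minimal client sketch, not part of the original report; the socket path is taken from the `-chardev socket,path=...` option in step 1, and `qmp_command()` is a hypothetical helper name introduced here for illustration:

```python
#!/usr/bin/env python3
# Minimal QMP client sketch (illustration only, not from the original report).
import json
import socket

# Assumption: matches the -chardev socket,path=... option in step 1.
QMP_SOCKET = "/tmp/monitor-qmpmonitor1-20211215-212014-u83qUkY3"

def qmp_command(chan, execute, arguments=None):
    """Send one QMP command and return the first non-event reply."""
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    chan.write(json.dumps(cmd) + "\n")
    chan.flush()
    while True:
        msg = json.loads(chan.readline())
        if "event" not in msg:  # skip JOB_STATUS_CHANGE and other async events
            return msg

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(QMP_SOCKET)
chan = sock.makefile("rw")
json.loads(chan.readline())            # consume the QMP greeting
qmp_command(chan, "qmp_capabilities")  # enter command mode
```

Once capabilities are negotiated, each `{'execute': ...}` line in steps 2 and 3 below can be passed through `qmp_command()` as-is.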
2. Create the target node:

```
{'execute':'blockdev-create','arguments':{'options':{'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
{"timestamp": {"seconds": 1639722920, "microseconds": 864008}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job1"}}
{"timestamp": {"seconds": 1639722920, "microseconds": 864055}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job1"}}
{"return": {}}
{"timestamp": {"seconds": 1639722921, "microseconds": 762531}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job1"}}
{"timestamp": {"seconds": 1639722921, "microseconds": 762575}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job1"}}
{"timestamp": {"seconds": 1639722921, "microseconds": 762596}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job1"}}

{'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
{"return": {}}

{'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480,'backing-file':'/home/data.qcow2','backing-fmt':'qcow2'},'job-id':'job2'}}
{"timestamp": {"seconds": 1639722937, "microseconds": 619983}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job2"}}
{"timestamp": {"seconds": 1639722937, "microseconds": 620029}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job2"}}
{"return": {}}
{"timestamp": {"seconds": 1639722937, "microseconds": 621572}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job2"}}
{"timestamp": {"seconds": 1639722937, "microseconds": 621602}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job2"}}
{"timestamp": {"seconds": 1639722937, "microseconds": 621622}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job2"}}

{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1','backing':null}}
{"return": {}}

{'execute':'job-dismiss','arguments':{'id':'job1'}}
{'execute':'job-dismiss','arguments':{'id':'job2'}}
{"timestamp": {"seconds": 1639722951, "microseconds": 844576}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job1"}}
{"return": {}}
{"timestamp": {"seconds": 1639722951, "microseconds": 844937}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job2"}}
{"return": {}}
```

3. Take a snapshot from sn1 to sn1:

```
{"execute":"blockdev-snapshot","arguments":{"node":"sn1","overlay":"sn1"}}
```

Test Result:

In step 3, the snapshot fails gracefully (no core dump) with:

```
{"error": {"class": "GenericError", "desc": "Making 'sn1' a backing child of 'sn1' would create a cycle"}}
```

QE bot (pre verify): Set 'Verified:Tested,SanityOnly' as the gating/tier1 tests pass.

Per comment 11 and comment 12, setting the bug's status to "VERIFIED".

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (new packages: qemu-kvm), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2307
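For automated regression testing of the fixed behavior, the negative case in step 3 can be asserted directly. This is a sketch only, reusing the hypothetical `qmp_command()` helper and `chan` connection from the sketch after step 1, and assuming `sn1` has already been added as in step 2:

```python
# Issue the invalid snapshot request: node and overlay are the same node.
reply = qmp_command(chan, "blockdev-snapshot",
                    {"node": "sn1", "overlay": "sn1"})

# Fixed behavior (qemu-kvm-6.2.0-1.el9): QEMU returns a GenericError about
# the backing-chain cycle instead of dumping core.
assert reply.get("error", {}).get("class") == "GenericError", reply
assert "cycle" in reply["error"]["desc"], reply
print("PASS:", reply["error"]["desc"])
```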