Bug 1508708
| Field | Value |
| --- | --- |
| Summary | [data plane] Qemu-kvm core dumped when doing block-stream and block-job-cancel to a data disk with data-plane enabled |
| Product | Red Hat Enterprise Linux 7 |
| Component | qemu-kvm-rhev |
| Version | 7.5 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Reporter | yilzhang |
| Assignee | Kevin Wolf <kwolf> |
| QA Contact | Gu Nini <ngu> |
| CC | chayang, coli, gianluca.cecchi, juzhang, knoel, lmiksik, lolyu, ngu, qizhu, qzhang, virt-maint |
| Target Milestone | rc |
| Target Release | --- |
| Hardware | All |
| OS | Linux |
| Type | Bug |
| Bug Blocks | 1506531 |
| Last Closed | 2019-08-22 09:18:46 UTC |
Description (yilzhang, 2017-11-02 01:50:18 UTC)

1. x86 also has this bug.
2. This bug cannot be reproduced if data-plane is not used.

One qemu command line used when reproducing this bug:

```
/usr/libexec/qemu-kvm \
-smp 8,sockets=2,cores=4,threads=1 -m 32768 \
-serial unix:/tmp/3dp-serial.log,server,nowait \
-nodefaults \
-rtc base=localtime,clock=host \
-boot menu=on \
-monitor stdio \
-monitor unix:/tmp/monitor1,server,nowait \
-qmp tcp:0:777,server,nowait \
-vnc :1 \
-device virtio-vga \
-device pci-bridge,id=bridge1,chassis_nr=1,bus=pci.0 \
-netdev tap,id=net0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown,vhost=on \
-device virtio-net-pci,netdev=net0,id=nic0,mac=52:54:00:c3:e7:8a,bus=bridge1,addr=0x1e \
-object iothread,id=iothread0 \
-device virtio-scsi-pci,bus=bridge1,addr=0x1f,id=scsi0,iothread=iothread0 \
-drive file=/home/yilzhang/rhel7.5__3.10.0-760.el7.ppc64le.qcow2,media=disk,if=none,cache=none,id=drive_sysdisk,aio=native,format=qcow2,werror=stop,rerror=stop \
-device scsi-hd,drive=drive_sysdisk,bus=scsi0.0,id=sysdisk,bootindex=0 \
-drive file=/home/images/disk-image_1.qcow2,if=none,cache=none,id=drive_ddisk_1,format=qcow2,werror=stop,rerror=stop \
-device scsi-hd,drive=drive_ddisk_1,bus=scsi0.0,id=ddisk_1
```

---

I'm able to reproduce this fairly easily, thanks to the information provided in the Description and Comment #3. I haven't yet tracked it down to a definitive root cause, but what appears to be happening is that blk and/or blk->bs is freed before stream_complete executes in the main loop BH.

Created attachment 1352092 [details]
Reproducer test script
This is a self-contained script to reproduce this bug (I suggest running it in a test directory). It will create all images needed, run QEMU, send the appropriate QMP commands, and drop into GDB on segfault.
On my test system, I can reproduce it 100% via this script against both upstream QEMU and qemu-kvm-rhev-2.10.0-6.el7.
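For reference, the QMP side of that reproduction boils down to streaming the data disk and cancelling the job while it is still in flight. A minimal standalone C client along these lines would drive it (a sketch only; the attached script is the authoritative reproducer, and the port and device id are taken from the Description's command line, which assumes the data disk already has a backing file to stream from):

```c
/*
 * Minimal QMP client sketch for this reproduction -- NOT the attached
 * reproducer script, just the shape of the QMP traffic involved.
 * Assumptions: QEMU was started with -qmp tcp:0:777,server,nowait (as in
 * the command line above), and the data disk drive_ddisk_1 already has a
 * backing file (e.g. from a prior blockdev-snapshot-sync), so block-stream
 * has work to do when block-job-cancel races against it.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void qmp_send(int fd, const char *cmd)
{
    char buf[4096];
    ssize_t n;

    if (write(fd, cmd, strlen(cmd)) < 0) {
        perror("qmp write");
        return;
    }
    n = read(fd, buf, sizeof(buf) - 1);   /* crude: assume one reply per read */
    if (n > 0) {
        buf[n] = '\0';
        printf("<- %s", buf);
    }
}

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(777),                         /* -qmp tcp:0:777 */
        .sin_addr = { .s_addr = htonl(INADDR_LOOPBACK) },
    };
    char greeting[4096];
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("qmp connect");
        return 1;
    }
    if (read(fd, greeting, sizeof(greeting) - 1) < 0) { /* greeting banner */
        perror("qmp greeting");
        return 1;
    }

    qmp_send(fd, "{\"execute\": \"qmp_capabilities\"}\n");
    qmp_send(fd, "{\"execute\": \"block-stream\", "
                 "\"arguments\": {\"device\": \"drive_ddisk_1\"}}\n");
    /* Cancel immediately, while the stream job is still running: the
     * window between scheduling and entering the job's coroutine is
     * where the crash is triggered. */
    qmp_send(fd, "{\"execute\": \"block-job-cancel\", "
                 "\"arguments\": {\"device\": \"drive_ddisk_1\"}}\n");

    close(fd);
    return 0;
}
```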
---

In commit 2f47da5f7f, the qemu coroutine sleep callback was changed to use aio_co_wake(), so that the coroutine can be properly scheduled in an AioContext BH if the current AioContext does not match the block job/coroutine context. There is a race condition when entering the block job to cancel the coroutine: depending on the AioContext, the coroutine will either be entered directly or scheduled for later. We can end up with a coroutine that is scheduled while that same coroutine is deferring its cleanup to the main loop. The cleanup happens, freeing the block job, but the scheduled coroutine runs sometime after (or perhaps during) this cleanup. The BlockJob structure, and the coroutine itself, are freed. This causes us to access invalid memory via corrupted pointers; in this case, it happens when the late and defunct coroutine tries to run stream_complete a second time. (A simplified standalone model of this race is sketched further down in this report.)

I have a patch for a fix that works with my test script: https://github.com/codyprime/qemu-kvm-jtc/commit/9d3cecfcdfb62434452cd2ef456a3ea80ac98e9d

It will be submitted to the upstream list once I get an iotest case worked out for it.

---

Patches submitted to qemu-devel: http://lists.nongnu.org/archive/html/qemu-devel/2017-11/msg03485.html

Patches posted as part of BZ 1506531.

---

May be related to bug 1519721.

---

From the series for BZ 1506531: fix included in qemu-kvm-rhev-2.10.0-12.el7.

---

Tested with qemu-kvm-rhev-2.10.0-12.el7; still hit a core dump when quitting qemu.

Command line:

```
/usr/libexec/qemu-kvm -M q35,accel=kvm,kernel-irqchip=split -m 4G \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,iothread=iothread1 \
-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=writethrough,format=qcow2,file=gluster://10.66.8.119/gv0/win2016-64-virtio-scsi.qcow2 \
-device scsi-hd,id=image1,drive=drive_image1 \
-device virtio-net-pci,mac=9a:89:8a:8b:8c:8d,id=idTqeiKU,netdev=idU17IIx,bus=pcie.0 \
-object iothread,id=iothread1 \
-netdev tap,id=idU17IIx \
-cpu 'SandyBridge',+kvm_pv_unhalt \
-vnc :0 \
-enable-kvm \
-qmp tcp::5555,server,nowait \
-monitor stdio \
-object iothread,id=iothread2 \
-object iothread,id=iothread3 \
-device virtio-scsi-pci,id=virtio_scsi_pci1,bus=pcie.0,iothread=iothread2 \
-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=writethrough,format=qcow2,file=gluster://10.66.8.119/gv0/data1.qcow2 \
-device scsi-hd,id=image2,drive=drive_image2
```

Steps:

```
{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "drive_image2","snapshot-file": "sn1-data", "format": "qcow2", "mode": "absolute-paths" } }
{"return": {}}
{ "execute": "block-stream", "arguments": { "device": "drive_image2"}}
{"return": {}}
{ "execute": "block-job-cancel", "arguments": { "device": "drive_image2"}}
{"return": {}}
```

Result:

```
(qemu) quit
qemu-kvm: /builddir/build/BUILD/qemu-2.10.0/hw/scsi/virtio-scsi.c:246: virtio_scsi_ctx_check: Assertion `blk_get_aio_context(d->conf.blk) == s->ctx' failed.
Aborted (core dumped)
```

```
(gdb) bt full
#0  0x00007fb7b36911a7 in raise () at /lib64/libc.so.6
#1  0x00007fb7b3692898 in abort () at /lib64/libc.so.6
#2  0x00007fb7b3689fc8 in __assert_fail_base () at /lib64/libc.so.6
#3  0x00007fb7b368a074 in () at /lib64/libc.so.6
#4  0x000056216110a4c7 in virtio_scsi_ctx_check (s=<optimized out>, s=<optimized out>, d=0x5621658ad680) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:246
#5  0x000056216118fb16 in virtio_scsi_handle_cmd_vq (s=<optimized out>, s=<optimized out>, d=0x5621658ad680) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:246
        vs = 0x562165a68170
        rc = <optimized out>
        req = 0x5621654f6c00
        next = <optimized out>
        ret = <optimized out>
        progress = true
        reqs = {tqh_first = 0x0, tqh_last = 0x7fb7abdf67b0}
#6  0x000056216118fb16 in virtio_scsi_handle_cmd_vq (req=0x5621654f6c00, s=0x562165a68170) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:559
        vs = 0x562165a68170
        rc = <optimized out>
        req = 0x5621654f6c00
        next = <optimized out>
        ret = <optimized out>
        progress = true
        reqs = {tqh_first = 0x0, tqh_last = 0x7fb7abdf67b0}
#7  0x000056216118fb16 in virtio_scsi_handle_cmd_vq (s=s@entry=0x562165a68170, vq=vq@entry=0x562165a70100) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:599
        req = 0x5621654f6c00
        next = <optimized out>
        ret = <optimized out>
        progress = true
        reqs = {tqh_first = 0x0, tqh_last = 0x7fb7abdf67b0}
#8  0x00005621611906fa in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>, vq=0x562165a70100) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi-dataplane.c:60
        progress = <optimized out>
        s = 0x562165a68170
#9  0x000056216119ced6 in virtio_queue_host_notifier_aio_poll (vq=0x562165a70100) at /usr/src/debug/qemu-2.10.0/hw/virtio/virtio.c:1506
        n = 0x562165a70168
        vq = 0x562165a70100
        progress = <optimized out>
#10 0x000056216119ced6 in virtio_queue_host_notifier_aio_poll (opaque=0x562165a70168) at /usr/src/debug/qemu-2.10.0/hw/virtio/virtio.c:2420
        n = 0x562165a70168
        vq = 0x562165a70100
        progress = <optimized out>
#11 0x000056216142f55e in run_poll_handlers_once (ctx=ctx@entry=0x562163195cc0) at util/aio-posix.c:490
        progress = false
        node = 0x5621653c9740
#12 0x000056216142ff85 in aio_poll (blocking=true, ctx=0x562163195cc0) at util/aio-posix.c:566
        node = <optimized out>
        i = <optimized out>
        ret = 0
        progress = <optimized out>
        timeout = <optimized out>
        start = 183182899378971
        __PRETTY_FUNCTION__ = "aio_poll"
#13 0x000056216142ff85 in aio_poll (ctx=0x562163195cc0, blocking=blocking@entry=true) at util/aio-posix.c:595
        node = <optimized out>
        i = <optimized out>
        ret = 0
        progress = <optimized out>
        timeout = <optimized out>
        start = 183182899378971
        __PRETTY_FUNCTION__ = "aio_poll"
#14 0x000056216122415e in iothread_run (opaque=0x5621630e3500) at iothread.c:59
        iothread = 0x5621630e3500
#15 0x00007fb7b3a2fdd5 in start_thread () at /lib64/libpthread.so.0
#16 0x00007fb7b375994d in clone () at /lib64/libc.so.6
```

---

With the latest qemu-kvm-rhev-2.10.0-14.el7.x86_64, I still hit a core dump once while performing block-stream, without even cancelling it. It is not 100% reproducible, only 1/10.

Steps:

1. Create a disk:

```
qemu-img create -f qcow2 gluster://10.66.8.119/gv0/data1.qcow2 2G
```
2. Launch the guest with the disk, with an iothread:

```
/usr/libexec/qemu-kvm \
-M q35,accel=kvm,kernel-irqchip=split \
-m 4G \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,iothread=iothread1 \
-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=writethrough,format=qcow2,file=/home/kvm_autotest_root/images/rhel75-64-virtio-scsi.qcow2 \
-device scsi-hd,id=image1,drive=drive_image1 \
-device virtio-net-pci,mac=9a:89:8a:8b:8c:8d,id=idTqeiKU,netdev=idU17IIx,bus=pcie.0 \
-object iothread,id=iothread1 \
-netdev tap,id=idU17IIx \
-cpu 'SandyBridge',+kvm_pv_unhalt \
-vnc :0 \
-enable-kvm \
-qmp tcp::5555,server,nowait \
-monitor stdio \
-object iothread,id=iothread2 \
-object iothread,id=iothread3 \
-device virtio-scsi-pci,id=virtio_scsi_pci1,bus=pcie.0,iothread=iothread2 \
-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=writethrough,format=qcow2,file=gluster://10.66.8.119/gv0/data1.qcow2 \
-device scsi-hd,id=image2,drive=drive_image2
```

3. Live snapshot and block-stream:

```
{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "drive_image2","snapshot-file": "sn1-data", "format": "qcow2", "mode": "absolute-paths" } }
{"return": {}}
{ "execute": "block-stream", "arguments": { "device": "drive_image2"}}
{"return": {}}
{"timestamp": {"seconds": 1514958794, "microseconds": 42977}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_image2", "len": 2147483648, "offset": 2147483648, "speed": 0, "type": "stream"}}
```

Result:

```
(qemu) Formatting 'sn1-data', fmt=qcow2 size=2147483648 backing_file=gluster://10.66.8.119/gv0/data1.qcow2 backing_fmt=qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
(qemu) quit
qemu-kvm: /builddir/build/BUILD/qemu-2.10.0/hw/scsi/virtio-scsi.c:246: virtio_scsi_ctx_check: Assertion `blk_get_aio_context(d->conf.blk) == s->ctx' failed.
Aborted (core dumped)
```

```
(gdb) bt full
#0  0x00007fb4856651a7 in raise () at /lib64/libc.so.6
#1  0x00007fb485666898 in abort () at /lib64/libc.so.6
#2  0x00007fb48565dfc8 in __assert_fail_base () at /lib64/libc.so.6
#3  0x00007fb48565e074 in () at /lib64/libc.so.6
#4  0x000055be1e2ce877 in virtio_scsi_ctx_check (s=<optimized out>, s=<optimized out>, d=0x55be223dd680) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:246
#5  0x000055be1e353ec6 in virtio_scsi_handle_cmd_vq (s=<optimized out>, s=<optimized out>, d=0x55be223dd680) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:246
        vs = 0x55be22598170
        rc = <optimized out>
        req = 0x55be21718500
        next = <optimized out>
        ret = <optimized out>
        progress = true
        reqs = {tqh_first = 0x0, tqh_last = 0x7fb47ddca7b0}
#6  0x000055be1e353ec6 in virtio_scsi_handle_cmd_vq (req=0x55be21718500, s=0x55be22598170) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:559
        vs = 0x55be22598170
        rc = <optimized out>
        req = 0x55be21718500
        next = <optimized out>
        ret = <optimized out>
        progress = true
        reqs = {tqh_first = 0x0, tqh_last = 0x7fb47ddca7b0}
#7  0x000055be1e353ec6 in virtio_scsi_handle_cmd_vq (s=s@entry=0x55be22598170, vq=vq@entry=0x55be225a2100) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:599
        req = 0x55be21718500
        next = <optimized out>
        ret = <optimized out>
        progress = true
        reqs = {tqh_first = 0x0, tqh_last = 0x7fb47ddca7b0}
#8  0x000055be1e354aaa in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>, vq=0x55be225a2100) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi-dataplane.c:60
        progress = <optimized out>
        s = 0x55be22598170
#9  0x000055be1e3612d6 in virtio_queue_host_notifier_aio_poll (vq=0x55be225a2100) at /usr/src/debug/qemu-2.10.0/hw/virtio/virtio.c:1506
        n = 0x55be225a2168
        vq = 0x55be225a2100
        progress = <optimized out>
#10 0x000055be1e3612d6 in virtio_queue_host_notifier_aio_poll (opaque=0x55be225a2168) at /usr/src/debug/qemu-2.10.0/hw/virtio/virtio.c:2420
        n = 0x55be225a2168
        vq = 0x55be225a2100
        progress = <optimized out>
#11 0x000055be1e5f483e in run_poll_handlers_once (ctx=ctx@entry=0x55be1fcc5cc0) at util/aio-posix.c:497
        progress = false
        node = 0x55be20add9e0
#12 0x000055be1e5f5285 in aio_poll (blocking=true, ctx=0x55be1fcc5cc0) at util/aio-posix.c:573
        node = <optimized out>
        i = <optimized out>
        ret = 0
        progress = <optimized out>
        timeout = <optimized out>
        start = 84268908368309
        __PRETTY_FUNCTION__ = "aio_poll"
#13 0x000055be1e5f5285 in aio_poll (ctx=0x55be1fcc5cc0, blocking=blocking@entry=true) at util/aio-posix.c:602
        node = <optimized out>
        i = <optimized out>
        ret = 0
        progress = <optimized out>
        timeout = <optimized out>
        start = 84268908368309
        __PRETTY_FUNCTION__ = "aio_poll"
#14 0x000055be1e3e8876 in iothread_run (opaque=0x55be1fccf260) at iothread.c:59
        iothread = 0x55be1fccf260
#15 0x00007fb485a03dd5 in start_thread () at /lib64/libpthread.so.0
#16 0x00007fb48572d94d in clone () at /lib64/libc.so.6
```

---

I'm not able to reproduce your failures on either qemu-kvm-rhev-2.10.0-14.el7.x86_64 or qemu-kvm-rhev-2.10.0-12.el7.x86_64 (or qemu-kvm-rhev-2.10.0-16.el7.x86_64). Would it be possible for me to get access to your test machine?

---

Hit the same core dump 3 times during automation testing on qemu-kvm-rhev-2.10.0-16.el7.x86_64.
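---

To make the lifecycle race from the root-cause analysis above concrete, here is the simplified standalone C model referenced there. All names are invented and this is not QEMU code (the real fix is the patch series linked above); it only models the shape of the bug: a completion callback left on a scheduler queue (standing in for an AioContext bottom half) after the job's owner has already finalized it. The guarded version keeps a reference for each queued entry and refuses to re-enter a job that has already completed:

```c
/* Standalone model of the scheduled-coroutine use-after-free -- NOT QEMU
 * code; struct and function names are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct job {
    int refcnt;
    bool completed;                       /* guard: job must not be re-entered */
    void (*complete_cb)(struct job *);    /* stands in for stream_complete */
};

#define MAX_SCHEDULED 8
static struct job *scheduled[MAX_SCHEDULED];
static int n_scheduled;

static struct job *job_ref(struct job *j) { j->refcnt++; return j; }

static void job_unref(struct job *j)
{
    if (--j->refcnt == 0) {
        free(j);
    }
}

/* Stand-in for scheduling a coroutine into an AioContext BH. */
static void schedule_job(struct job *j)
{
    scheduled[n_scheduled++] = job_ref(j);   /* the queue holds a reference */
}

/* Stand-in for the BH draining its queue later. */
static void drain_scheduled(void)
{
    for (int i = 0; i < n_scheduled; i++) {
        struct job *j = scheduled[i];
        /* Buggy shape: call j->complete_cb(j) unconditionally on a job the
         * owner already freed -> use-after-free (stream_complete running a
         * second time).  Guarded shape: skip completed jobs. */
        if (!j->completed) {
            j->complete_cb(j);
        }
        job_unref(j);                        /* safe: our ref kept j alive */
    }
    n_scheduled = 0;
}

static void stream_complete_model(struct job *j)
{
    printf("completing job %p\n", (void *)j);
    j->completed = true;
}

int main(void)
{
    struct job *j = calloc(1, sizeof(*j));

    if (!j) {
        return 1;
    }
    j->refcnt = 1;
    j->complete_cb = stream_complete_model;

    schedule_job(j);       /* cancel path: coroutine queued for later entry */

    /* Main-loop BH: the job completes and the owner drops its reference.
     * Without the queue's extra reference, j would be freed right here and
     * the queued entry would re-enter a dangling coroutine. */
    j->completed = true;
    job_unref(j);

    drain_scheduled();     /* late entry is skipped, then the job is freed */
    return 0;
}
```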
---

Hit a similar core dump once during block-commit on qemu-kvm-rhev-2.10.0-21.el7_5.1.x86_64:

```
(gdb) bt
#0  0x00007f53acc13207 in raise () at /lib64/libc.so.6
#1  0x00007f53acc148f8 in abort () at /lib64/libc.so.6
#2  0x00007f53acc0c026 in __assert_fail_base () at /lib64/libc.so.6
#3  0x00007f53acc0c0d2 in () at /lib64/libc.so.6
#4  0x00005642aadea517 in virtio_scsi_ctx_check (s=<optimized out>, s=<optimized out>, d=0x5642ae7d8500) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:246
#5  0x00005642aae6f936 in virtio_scsi_handle_cmd_vq (s=<optimized out>, s=<optimized out>, d=0x5642ae7d8500) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:246
#6  0x00005642aae6f936 in virtio_scsi_handle_cmd_vq (req=0x5642aecd8d80, s=0x5642af61a170) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:559
#7  0x00005642aae6f936 in virtio_scsi_handle_cmd_vq (s=s@entry=0x5642af61a170, vq=vq@entry=0x5642af624100) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi.c:599
#8  0x00005642aae7051a in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>, vq=0x5642af624100) at /usr/src/debug/qemu-2.10.0/hw/scsi/virtio-scsi-dataplane.c:60
#9  0x00005642aae7cd56 in virtio_queue_host_notifier_aio_poll (vq=0x5642af624100) at /usr/src/debug/qemu-2.10.0/hw/virtio/virtio.c:1506
#10 0x00005642aae7cd56 in virtio_queue_host_notifier_aio_poll (opaque=0x5642af624168) at /usr/src/debug/qemu-2.10.0/hw/virtio/virtio.c:2420
#11 0x00005642ab11106e in run_poll_handlers_once (ctx=ctx@entry=0x5642ad99bb80) at util/aio-posix.c:497
#12 0x00005642ab111ab5 in aio_poll (blocking=true, ctx=0x5642ad99bb80) at util/aio-posix.c:573
#13 0x00005642ab111ab5 in aio_poll (ctx=0x5642ad99bb80, blocking=blocking@entry=true) at util/aio-posix.c:602
#14 0x00005642aaf046a6 in iothread_run (opaque=0x5642ad9a5260) at iothread.c:59
#15 0x00007f53acfb1dd5 in start_thread () at /lib64/libpthread.so.0
#16 0x00007f53accdbb3d in clone () at /lib64/libc.so.6
```

---

Hi Jeff, sorry, but I need to release ibm-x3250m6-10.lab.eng.pek2.redhat.com for other usage; is that okay with you? We can set up another reproduction environment when you need one.

---

Could reproduce this issue, as ngu mentioned, with the following components:

- qemu-kvm-rhev-2.12.0-7.el7.x86_64
- kernel-3.10.0-918.el7.x86_64

1. Boot up the guest:

```
/usr/libexec/qemu-kvm \
-name guest=test-virt0 \
-machine pc,accel=kvm,usb=off,vmport=off,dump-guest-core=off \
-cpu SandyBridge \
-m 4G \
-smp 4,sockets=4,cores=1,threads=1 \
-boot strict=on \
-object iothread,id=iothread0 \
-device virtio-scsi-pci,bus=pci.0,addr=0x5,iothread=iothread0,id=scsi0 \
-drive file=/home/kvm_autotest_root/images/rhel76-64-virtio-scsi.qcow2,format=qcow2,snapshot=off,cache=none,if=none,aio=native,id=img0 \
-device scsi-hd,bus=scsi0.0,drive=img0,scsi-id=0,lun=0,id=scsi-disk0,bootindex=0 \
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=51:54:12:b3:20:61,bus=pci.0,addr=0x3 \
-device qxl-vga \
-vnc :1 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 \
-monitor stdio \
-qmp tcp:0:4444,server,nowait \
-usbdevice tablet
```

2. Create a snapshot, then block-stream:

```
{ "execute": "qmp_capabilities" }
{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "img0","snapshot-file": "sn1.qcow2", "format": "qcow2" } }
{ "execute": "block-stream", "arguments": { "device": "img0"}}
```

Result: guest hangs.
bt info:

```
# gdb -batch -ex bt -p 23305
[New LWP 23341]
[New LWP 23338]
[New LWP 23324]
[New LWP 23323]
[New LWP 23322]
[New LWP 23321]
[New LWP 23319]
[New LWP 23307]
[New LWP 23306]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f22cef032cf in __GI_ppoll (fds=0x55fd8e9383c0, nfds=2, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
56        result = INLINE_SYSCALL (ppoll, 5, fds, nfds, timeout, sigmask,
#0  0x00007f22cef032cf in __GI_ppoll (fds=0x55fd8e9383c0, nfds=2, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
#1  0x000055fd8cc1a7cb in qemu_poll_ns (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055fd8cc1a7cb in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=-1) at util/qemu-timer.c:322
#3  0x000055fd8cc1c515 in aio_poll (ctx=0x55fd8e8677c0, blocking=blocking@entry=true) at util/aio-posix.c:629
#4  0x000055fd8cb97caa in bdrv_flush (bs=bs@entry=0x55fd8e93c000) at block/io.c:2531
#5  0x000055fd8cb489ab in bdrv_unref (bs=0x55fd8e93c000) at block.c:3326
#6  0x000055fd8cb489ab in bdrv_unref (bs=0x55fd8e93c000) at block.c:3514
#7  0x000055fd8cb489ab in bdrv_unref (bs=0x55fd8e93c000) at block.c:4614
#8  0x000055fd8cb4bf44 in block_job_remove_all_bdrv (job=job@entry=0x55fd8e848c40) at blockjob.c:177
#9  0x000055fd8cb4bf93 in block_job_free (job=0x55fd8e848c40) at blockjob.c:94
#10 0x000055fd8cb4d3dd in job_unref (job=0x55fd8e848c40) at job.c:367
#11 0x000055fd8cb4d5e8 in job_finalize_single (job=0x55fd8e848c40) at job.c:654
#12 0x000055fd8cb4d5e8 in job_finalize_single (job=0x55fd8e848c40) at job.c:722
#13 0x000055fd8cb4cc50 in job_txn_apply (fn=0x55fd8cb4d4e0 <job_finalize_single>, lock=true, txn=<optimized out>) at job.c:150
#14 0x000055fd8ca1aa67 in stream_complete (job=0x55fd8e848c40, opaque=0x55fd8ecb26a8) at block/stream.c:96
#15 0x000055fd8cb4caf2 in job_defer_to_main_loop_bh (opaque=0x55fd9082efc0) at job.c:968
#16 0x000055fd8cc19331 in aio_bh_poll (bh=0x55fd8e920030) at util/async.c:90
#17 0x000055fd8cc19331 in aio_bh_poll (ctx=ctx@entry=0x55fd8e8677c0) at util/async.c:118
#18 0x000055fd8cc1c3d0 in aio_dispatch (ctx=0x55fd8e8677c0) at util/aio-posix.c:436
#19 0x000055fd8cc1920e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:261
#20 0x00007f22e75a2049 in g_main_context_dispatch (context=0x55fd8e8baa50) at gmain.c:3175
#21 0x00007f22e75a2049 in g_main_context_dispatch (context=context@entry=0x55fd8e8baa50) at gmain.c:3828
#22 0x000055fd8cc1b6d7 in main_loop_wait () at util/main-loop.c:215
#23 0x000055fd8cc1b6d7 in main_loop_wait (timeout=<optimized out>) at util/main-loop.c:238
#24 0x000055fd8cc1b6d7 in main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:497
#25 0x000055fd8c8c2f27 in main () at vl.c:1963
#26 0x000055fd8c8c2f27 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4768
```

---

Hi Jeff, Nini,

Looks like the host ibm-x3250m6-10.lab.eng.pek2.redhat.com can be released. I am going to clean the environment; is that okay with you?

---

(In reply to Qianqian Zhu from comment #35)
> Hi Jeff, Nini,
> Looks like the host ibm-x3250m6-10.lab.eng.pek2.redhat.com can be released,
> I am going to clean the environment, is that okay with you?

Qianqian,

It's ok for me. I can reproduce the current issue with any host.
---

(In reply to Qianqian Zhu from comment #35)
> Hi Jeff, Nini,
> Looks like the host ibm-x3250m6-10.lab.eng.pek2.redhat.com can be released,
> I am going to clean the environment, is that okay with you?

Yes, go ahead and release, thanks!

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2553
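---

A closing note on the assertion that recurs throughout this report: `blk_get_aio_context(d->conf.blk) == s->ctx` in virtio_scsi_ctx_check() says that a dataplane device may only process requests while its disk's BlockBackend is bound to the iothread's AioContext. The schematic standalone model below (invented names, not QEMU code; a sketch of the invariant only) shows how moving a backend to another context while the device is still polling trips exactly this kind of check, which is consistent with the backtraces above, where the assertion fires in the iothread after a block job finishes:

```c
/* Schematic model of the AioContext-binding invariant -- NOT QEMU code.
 * The second handle_request() call intentionally aborts, mirroring the
 * virtio_scsi_ctx_check() aborts reported in this bug. */
#include <assert.h>
#include <stdio.h>

struct aio_context { const char *name; };

struct block_backend { struct aio_context *ctx; };   /* like blk's context */

struct scsi_dataplane {
    struct aio_context *ctx;       /* the iothread's context (s->ctx) */
    struct block_backend *disk;
};

static void handle_request(struct scsi_dataplane *s)
{
    /* Same shape as the failing assertion: requests must only be handled
     * while the disk lives in the dataplane's own context. */
    assert(s->disk->ctx == s->ctx);
    printf("request handled in %s\n", s->ctx->name);
}

int main(void)
{
    struct aio_context main_ctx = { "main-loop" };
    struct aio_context iothread_ctx = { "iothread1" };
    struct block_backend disk = { &iothread_ctx };
    struct scsi_dataplane s = { &iothread_ctx, &disk };

    handle_request(&s);      /* fine: contexts match */

    /* A finishing block job moves the backend back to the main context
     * while the iothread is still polling the virtqueue... */
    disk.ctx = &main_ctx;

    handle_request(&s);      /* ...and the next request asserts */
    return 0;
}
```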