
Bug 821594

Summary: soft lockup - CPU#0 stuck for 67s! [migration/0:5] w/150 virtio NICs
Product: Red Hat Enterprise Linux 7
Reporter: FuXiangChun <xfu>
Component: qemu-kvm
Assignee: jason wang <jasowang>
Status: CLOSED WONTFIX
QA Contact: yduan
Severity: medium
Priority: medium
Docs Contact:
Version: 7.0
CC: ailan, chayang, encharamurthy, jasowang, jinzhao, juzhang, knoel, michen, mkenneth, mst, qzhang, virt-bugs, virt-maint, xfu
Target Milestone: rc
Keywords: Reopened
Target Release: ---
Flags: xfu: needinfo-
Hardware: x86_64
OS: Linux
Whiteboard: scalability
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-26 03:00:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
boot guest command line (flags: none)
boot guest command line (flags: none)
cpu stuck screenshot (flags: none)
backtrace of soft lockup (flags: none)

Description FuXiangChun 2012-05-15 05:21:33 UTC
Description of problem:
Boot a guest with the multifunction=on option and more than 150 virtio NICs. The guest shows "BUG: soft lockup - CPU#0 stuck for 67s!" during shutdown.

Version-Release number of selected component (if applicable):
#rpm -qa|grep qemu
qemu-kvm-0.12.1.2-2.292.el6.x86_64
# uname -r
2.6.32-270.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot the guest with the command line in the attachment.
2. Shut down the guest from within the guest:
   # init 0
  
Actual results:
Halting system...
BUG: soft lockup - CPU#0 stuck for 67s! [migration/0:5]
irq 11: nobody cared (try booting with the "irqpoll" option)
handlers:
[<ffffffffa00636b0>] (vp_interrupt+0x0/0x60 [virtio_pci])
[<ffffffffa00636b0>] (vp_interrupt+0x0/0x60 [virtio_pci])
[<ffffffffa00636b0>] (vp_interrupt+0x0/0x60 [virtio_pci])
[<ffffffffa00636b0>] (vp_interrupt+0x0/0x60 [virtio_pci])
[<ffffffffa00636b0>] (vp_interrupt+0x0/0x60 [virtio_pci])
[<ffffffffa00636b0>] (vp_interrupt+0x0/0x60 [virtio_pci])
[<ffffffffa00636b0>] (vp_interrupt+0x0/0x60 [virtio_pci])
............

Expected results:
The guest shuts down successfully without printing a CPU-stuck message.

Additional info:
rtl8139 has the same issue.

Comment 1 FuXiangChun 2012-05-15 05:24:14 UTC
Created attachment 584536 [details]
boot guest command line

If the guest is booted without multifunction, only 29 NICs can be supported, and the guest shuts down normally.

Comment 2 Michael S. Tsirkin 2012-05-20 09:48:59 UTC
which guest kernel version was used?

Comment 3 FuXiangChun 2012-05-21 01:57:26 UTC
(In reply to comment #2)
> which guest kernel version was used?

guest kernel version:
# uname -r
2.6.32-270.el6.x86_64

host kernel version:
# uname -r
2.6.32-270.el6.x86_64

Comment 4 Alex Williamson 2012-05-31 22:15:29 UTC
Please include your /etc/qemu-ifup and, if connecting to a bridge, how that bridge is configured.  Also, how are the devices configured in the guest, i.e. are they all brought up, are they all set to DHCP?  I'm unable to reproduce so far.  I can create the full number of devices and connect them to virbr0.  I'm currently only configuring eth0 in the guest, but I'll iterate through that next.

Comment 5 Alex Williamson 2012-05-31 23:08:53 UTC
(In reply to comment #1)
> Created attachment 584536 [details]
> boot guest command line
> 
> if boot guest without multifunction, then only can support 29 nics. guest
> shutdown normally

...
-netdev tap,id=hostnet148,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet148,mac=00:44:58:04:44:54,bus=pci.0,id=virtio-net-pci148,multifunction=on,addr=0x15.4  \
-netdev tap,id=hostnet149,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet149,mac=00:44:58:04:45:55,bus=pci.0,id=virtio-net-pci149,multifunction=on,addr=0x15.5  \
-netdev tap,id=hostnet150,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet150,mac=00:44:58:04:46:56,bus=pci.0,id=virtio-net-pci150,multifunction=on,addr=0x15.6  \
-netdev tap,id=hostnet151,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet151,mac=00:44:58:04:47:57,bus=pci.0,id=virtio-net-pci151,multifunction=on,addr=0x15.7  \

Why did we skip PCI slots 0x16-0x18 here?

All of the MAC addresses below, through to the end of the file, are invalid.  Please see: http://en.wikipedia.org/wiki/MAC_address

The least-significant bit of the first octet (the I/G bit) indicates a multicast MAC address.  This is not what you want.  Please retest with valid MAC addresses.  The bogus MAC addresses start suspiciously close to the point where you say things break.

-netdev tap,id=hostnet156,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet156,mac=01:45:59:05:40:50,bus=pci.0,id=virtio-net-pci156,multifunction=on,addr=0x19.0  \
-netdev tap,id=hostnet157,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet157,mac=01:45:59:05:41:51,bus=pci.0,id=virtio-net-pci157,multifunction=on,addr=0x19.1  \
-netdev tap,id=hostnet158,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet158,mac=01:45:59:05:42:52,bus=pci.0,id=virtio-net-pci158,multifunction=on,addr=0x19.2  \
...

Comment 6 Alex Williamson 2012-06-01 01:43:04 UTC
(In reply to comment #5)
> 
> Why did we skip PCI slots 0x16-0x18 here?

Never mind, I see these come later.

Comment 8 FuXiangChun 2012-06-01 03:23:35 UTC
Created attachment 588268 [details]
boot guest command line

Comment 9 FuXiangChun 2012-06-01 03:24:33 UTC
Created attachment 588269 [details]
cpu stuck screenshot

Comment 11 Alex Williamson 2012-06-01 23:02:07 UTC
Created attachment 588594 [details]
backtrace of soft lockup

This is not as easy to reproduce as it looks.  I believe the key is to have traffic going to the devices and to have a lot of them (approx. 56 or more) up, because the problem seems to be triggered by device interrupts during shutdown.  I was finally able to get it to trigger by flood-pinging the broadcast address through the bridge while the guest was shutting down.  Having the right NetworkManager packages installed is also key, as NM will bring up the interfaces but not take them down.

So, why do we need so many devices?  There are only a fixed number of device vectors available (a bit under 200).  Each virtio-net-pci NIC tries to use 3 for MSI.  Once the device vectors are exhausted, the remaining devices use legacy INTx interrupts.  On shutdown:

static void pci_device_shutdown(struct device *dev)
{
        struct pci_dev *pci_dev = to_pci_dev(dev);
        struct pci_driver *drv = pci_dev->driver;

        if (drv && drv->shutdown)
                drv->shutdown(pci_dev);
        pci_msi_shutdown(pci_dev);
        pci_msix_shutdown(pci_dev);
}

Guess what drivers don't have a .shutdown in their struct pci_driver...  Yep, virtio-pci and 8139cp (NB I haven't reproduced this yet with rtl8139).  So that means when we have fewer virtio-net-pci devices, the PCI core disables MSI/X for us and those interrupt handlers are shut down, but nothing stops the remaining devices using INTx from running.

I suspect the fix for this will be to create .shutdown hooks for the drivers that are missing them, doing a free_irq or disable_irq for INTx.  I still don't know why the ioread8 appears to be blocked, though.

Comment 12 RHEL Program Management 2012-07-10 07:00:44 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 13 RHEL Program Management 2012-07-11 02:07:45 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 18 Ronen Hod 2014-05-12 06:09:51 UTC
Closing this bug for RHEL 6; we will never get to it.
QE, can you check whether the bug exists in RHEL 7 too?

Comment 19 juzhang 2014-05-12 06:12:02 UTC
Hi Xiangchun,

Could you try this scenario on RHEL 7.0?

Best Regards,
Junyi

Comment 20 FuXiangChun 2014-05-13 14:49:53 UTC
Using the qemu-kvm command line in comment 8, I re-tested this bug with a RHEL 7.0 guest on a RHEL 7.0 host.  The host and guest kernel version is 3.10.0-123.el7.x86_64.

result:
The guest fails to boot, and the guest console repeats the content below during boot.

Red Hat Enterprise Linux Server 7.0 (Maipo) dracut-033-161.el7 (Initramfs)
[  630.785600] BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:1:43]
(the Initramfs banner line repeats continuously, interleaved with cursor-positioning escape sequences)

Comment 21 juzhang 2014-05-14 01:13:26 UTC
Reopening this bz per comment 20 and updating the product to RHEL 7.0.

Comment 23 jason wang 2014-07-14 07:35:44 UTC
This may be related to the inefficient memory API. Could you please try upstream qemu.git to see if this issue still exists?

Comment 24 FuXiangChun 2014-07-15 05:58:22 UTC
(In reply to jason wang from comment #23)
> May related to inefficient memory API. Could you pleas try upstream qemu.git
> to see if this issue still exist?

Using the qemu cli in comment 8, re-tested this bug with qemu-kvm-rhev-2.1.0-1.el7ev.preview.x86_64.

result:
qemu-kvm core dump. 

(gdb) bt
#0  0x00007fa307073989 in raise () from /usr/lib64/libc.so.6
#1  0x00007fa307075098 in abort () from /usr/lib64/libc.so.6
#2  0x00007fa30706c8f6 in __assert_fail_base () from /usr/lib64/libc.so.6
#3  0x00007fa30706c9a2 in __assert_fail () from /usr/lib64/libc.so.6
#4  0x00007fa30cca0c8d in memory_region_del_eventfd (mr=mr@entry=0x7fa31039cff8, addr=addr@entry=16, size=size@entry=2, 
    match_data=match_data@entry=true, data=data@entry=0, e=e@entry=0x7fa31035ddd0) at /usr/src/debug/qemu-2.1.0/memory.c:1614
#5  0x00007fa30ce3dad9 in virtio_pci_set_host_notifier_internal (proxy=0x7fa31039c7b0, n=0, assign=<optimized out>, 
    set_handler=<optimized out>) at hw/virtio/virtio-pci.c:202
#6  0x00007fa30ccc9761 in vhost_dev_disable_notifiers (hdev=hdev@entry=0x7fa30f11ac30, vdev=vdev@entry=0x7fa31039d188)
    at /usr/src/debug/qemu-2.1.0/hw/virtio/vhost.c:964
#7  0x00007fa30ccc126b in vhost_net_start_one (vq_index=0, dev=0x7fa31039d188, net=0x7fa30f11ac30)
    at /usr/src/debug/qemu-2.1.0/hw/net/vhost_net.c:249
#8  vhost_net_start (dev=dev@entry=0x7fa31039d188, ncs=0x7fa310284710, total_queues=total_queues@entry=1)
    at /usr/src/debug/qemu-2.1.0/hw/net/vhost_net.c:312
#9  0x00007fa30ccbd67d in virtio_net_vhost_status (status=7 '\a', n=0x7fa31039d188)
    at /usr/src/debug/qemu-2.1.0/hw/net/virtio-net.c:133
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /usr/src/debug/qemu-2.1.0/hw/net/virtio-net.c:152
#11 0x00007fa30ccc5558 in virtio_set_status (vdev=vdev@entry=0x7fa31039d188, val=val@entry=7 '\a')
    at /usr/src/debug/qemu-2.1.0/hw/virtio/virtio.c:550
#12 0x00007fa30ce3e870 in virtio_ioport_write (val=7, addr=<optimized out>, opaque=0x7fa31039c7b0) at hw/virtio/virtio-pci.c:306
#13 virtio_pci_config_write (opaque=0x7fa31039c7b0, addr=<optimized out>, val=7, size=<optimized out>) at hw/virtio/virtio-pci.c:430
#14 0x00007fa30cc9cf4a in access_with_adjusted_size (addr=addr@entry=18, value=value@entry=0x7fa2fe785af0, size=size@entry=1, 
    access_size_min=<optimized out>, access_size_max=<optimized out>, access=0x7fa30cc9d0c0 <memory_region_write_accessor>, 
    mr=0x7fa31039cff8) at /usr/src/debug/qemu-2.1.0/memory.c:481
#15 0x00007fa30cca1b17 in memory_region_dispatch_write (size=1, data=7, addr=18, mr=0x7fa31039cff8)
    at /usr/src/debug/qemu-2.1.0/memory.c:1143
#16 io_mem_write (mr=mr@entry=0x7fa31039cff8, addr=18, val=<optimized out>, size=1) at /usr/src/debug/qemu-2.1.0/memory.c:1976
#17 0x00007fa30cc6ce93 in address_space_rw (as=0x7fa30d2fb900 <address_space_io>, addr=addr@entry=49810, 
    buf=0x7fa30cb93000 <Address 0x7fa30cb93000 out of bounds>, len=len@entry=1, is_write=is_write@entry=true)
    at /usr/src/debug/qemu-2.1.0/exec.c:2054
#18 0x00007fa30cc9c3d0 in kvm_handle_io (count=1, size=1, direction=<optimized out>, data=<optimized out>, port=49810)
---Type <return> to continue, or q <return> to quit---
    at /usr/src/debug/qemu-2.1.0/kvm-all.c:1600
#19 kvm_cpu_exec (cpu=cpu@entry=0x7fa3100ed950) at /usr/src/debug/qemu-2.1.0/kvm-all.c:1737
#20 0x00007fa30cc8b582 in qemu_kvm_cpu_thread_fn (arg=0x7fa3100ed950) at /usr/src/debug/qemu-2.1.0/cpus.c:874
#21 0x00007fa30b741df3 in start_thread () from /usr/lib64/libpthread.so.0
#22 0x00007fa3071343dd in clone () from /usr/lib64/libc.so.6

Is this a new issue for upstream qemu?  Does QE need to file another bug to track it for upstream qemu?

Comment 25 jason wang 2014-07-15 08:12:08 UTC
(In reply to FuXiangChun from comment #24)
> (In reply to jason wang from comment #23)
> > May related to inefficient memory API. Could you pleas try upstream qemu.git
> > to see if this issue still exist?
> 
> According to qemu cli in comment 8. Re-tested this bug with
> qemu-kvm-rhev-2.1.0-1.el7ev.preview.x86_64. 
> 
> result:
> qemu-kvm core dump. 
> 
> [gdb backtrace snipped; duplicate of comment 24]
> 
> Is this a new issue for upstream qemu?  Do QE need to file another bug to
> track it for upstream qemu?

Please open a bug for qemu-kvm-rhev.

Again, please test upstream qemu.git (I mean you need to compile it yourself from git://git.qemu.org/qemu.git).

Comment 27 FuXiangChun 2014-07-16 05:13:04 UTC
Regarding the qemu core dump, QE filed a new bug, 1119707, to track it.

Comment 29 Enchara Ananthamurthy 2015-03-03 12:54:20 UTC
I see the syslog messages below very frequently in my setup, and after some time the setup becomes unreachable.
 
localhost(config)# 
Message from syslogd@localhost at Mar  3 14:16:31 ...
kernel:BUG: soft lockup - CPU#3 stuck for 22s! [flow_dumper:14429]
 
Message from syslogd@localhost at Mar  3 14:16:31 ...
kernel:BUG: soft lockup - CPU#12 stuck for 23s! [migration/12:133]
 
Message from syslogd@localhost at Mar  3 14:16:59 ...
kernel:BUG: soft lockup - CPU#3 stuck for 23s! [flow_dumper:14429]
 
Message from syslogd@localhost at Mar  3 14:16:59 ...
kernel:BUG: soft lockup - CPU#12 stuck for 23s! [migration/12:133]

Comment 35 jason wang 2017-11-17 06:58:35 UTC
Defer to 7.6 or even consider this for RHEL8.

Thanks