Bug 1115658 - kworker/0:0 using 100% cpu
Summary: kworker/0:0 using 100% cpu
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 24
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-02 20:59 UTC by Rafael Ávila de Espíndola
Modified: 2017-10-22 22:02 UTC
19 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-04-28 17:15:38 UTC
Type: Bug
Embargoed:


Attachments
acpidump (1.67 MB, text/plain)
2014-09-16 13:45 UTC, Rafael Ávila de Espíndola
no flags Details
Kworker trace when consuming 100% CPU on Fedora 23 (220.92 KB, text/plain)
2016-02-08 08:50 UTC, cmilsted
no flags Details

Description Rafael Ávila de Espíndola 2014-07-02 20:59:12 UTC
Description of problem:
From time to time, kworker/0:0 starts using 100% CPU.

Version-Release number of selected component (if applicable):


3.14.9-200.fc20.x86_64

How reproducible:

Always. Use the system for a few hours.

echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
cat /sys/kernel/debug/tracing/trace_pipe  > out.txt
grep $PID out.txt shows :

     kworker/0:0-25860 [000] d... 26390.246632: workqueue_queue_work: work struct=ffff88005fc79290 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.347255: workqueue_queue_work: work struct=ffff88005fc79190 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.347257: workqueue_queue_work: work struct=ffff88005fc79350 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d.h. 26390.360065: workqueue_queue_work: work struct=ffffffff81ca6b60 function=console_callback workqueue=ffff88103ec08a00 req_cpu=1024 cpu=0
     kworker/0:0-25860 [000] d... 26390.447927: workqueue_queue_work: work struct=ffff88005fc79190 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.447929: workqueue_queue_work: work struct=ffff88005fc79290 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.548544: workqueue_queue_work: work struct=ffff88005fc79190 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.548545: workqueue_queue_work: work struct=ffff88005fc79350 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.649104: workqueue_queue_work: work struct=ffff88005fc79190 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.649105: workqueue_queue_work: work struct=ffff88005fc79290 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d.s. 26390.721265: workqueue_queue_work: work struct=ffff88103f2105a0 function=vmstat_update workqueue=ffff88103ec08a00 req_cpu=1024 cpu=0
     kworker/0:0-25860 [000] d... 26390.749653: workqueue_queue_work: work struct=ffff88005fc79190 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.749667: workqueue_queue_work: work struct=ffff88005fc79350 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.850315: workqueue_queue_work: work struct=ffff88005fc79190 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.850317: workqueue_queue_work: work struct=ffff88005fc79290 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.950914: workqueue_queue_work: work struct=ffff88005fc79190 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d... 26390.950915: workqueue_queue_work: work struct=ffff88005fc79350 function=acpi_os_execute_deferred workqueue=ffff881027f01600 req_cpu=0 cpu=0
     kworker/0:0-25860 [000] d.s. 26390.983943: workqueue_queue_work: work struct=ffff881028729120 function=cfq_kick_queue workqueue=ffff881027f01000 req_cpu=1024 cpu=0

Comment 1 Rafael Ávila de Espíndola 2014-09-16 13:35:48 UTC
Doing

$ grep enabled /sys/firmware/acpi/interrupts/*

getting the number with the highest count, and then running

echo disable > /sys/firmware/acpi/interrupts/gpe16

"solved" the problem.
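The manual steps in this comment can be sketched as one small shell helper (a minimal sketch of the procedure above; the function name is made up for illustration):

```shell
# Sketch: scan the GPE counters and print the path of the enabled GPE
# file with the highest fire count (the likely interrupt-storm source).
find_busiest_gpe() {
    dir=$1    # e.g. /sys/firmware/acpi/interrupts
    for f in "$dir"/gpe*; do
        # Each gpeNN file looks like "   19406   enabled"; skip others.
        grep -q enabled "$f" || continue
        printf '%s %s\n' "$(awk '{print $1}' "$f")" "$f"
    done | sort -rn | head -n1 | cut -d' ' -f2
}

# Usage (as root; the echo is not persistent across reboots):
#   busiest=$(find_busiest_gpe /sys/firmware/acpi/interrupts)
#   echo disable > "$busiest"
```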

This might be https://bugzilla.kernel.org/show_bug.cgi?id=53071

Comment 2 Rafael Ávila de Espíndola 2014-09-16 13:45:00 UTC
Created attachment 938034 [details]
acpidump

Comment 3 Justin M. Forbes 2014-11-13 15:56:23 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.17.2-200.fc20.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 21, and are still experiencing this issue, please change the version to Fedora 21.

If you experience different issues, please open a new bug report for those.

Comment 4 Rafael Ávila de Espíndola 2014-11-13 18:06:19 UTC
Exactly the same bug in 3.17.2-200.fc20.x86_64.

Running

echo disable > /sys/firmware/acpi/interrupts/gpe16

Still "solves" the problem.
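Since the echo is lost on every reboot, one way to persist the workaround is a small oneshot systemd unit (a sketch under assumptions: the unit name and the GPE number here are illustrative, not confirmed by this report):

```ini
# /etc/systemd/system/disable-gpe16.service (illustrative name)
[Unit]
Description=Work around ACPI GPE interrupt storm (kworker at 100% CPU)

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo disable > /sys/firmware/acpi/interrupts/gpe16'

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable disable-gpe16.service`.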

Comment 5 Miroslav Lichvar 2014-12-25 21:52:50 UTC
I have a similar problem on an Acer AO756, except it shows up immediately after boot. 3.17.2-200.fc20 was the last kernel that didn't have this problem, and trying to disable the interrupt by echoing to the file with the highest count doesn't work.

Comment 6 Josh Cogliati 2015-01-20 14:41:07 UTC
I have a cylinder Mac Pro Model A1481, and I have a similar problem.

# grep enabled /sys/firmware/acpi/interrupts/*
/sys/firmware/acpi/interrupts/ff_gbl_lock:       0   enabled
/sys/firmware/acpi/interrupts/ff_pwr_btn:       0   enabled
/sys/firmware/acpi/interrupts/gpe07:       0   enabled
/sys/firmware/acpi/interrupts/gpe16:   19406   enabled
/sys/firmware/acpi/interrupts/gpe17:       3   enabled
# echo disable > /sys/firmware/acpi/interrupts/gpe16

solves the problem.  So this is also in Fedora 21.

kernel-3.17.8-300.fc21.x86_64

Linux inl426935 3.17.8-300.fc21.x86_64 #1 SMP Thu Jan 8 23:32:49 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Comment 7 Miroslav Lichvar 2015-01-28 16:26:18 UTC
I'm still seeing this with kernel-3.18.3-201.fc21.x86_64.

In /sys/firmware/acpi/interrupts/gpe0D I can see the counter increasing about 10K per second and the status is switching between enabled and disabled.

Comment 8 Fedora Kernel Team 2015-02-24 16:22:31 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.18.7-100.fc20.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 21, and are still experiencing this issue, please change the version to Fedora 21.

If you experience different issues, please open a new bug report for those.

Comment 9 Rafael Ávila de Espíndola 2015-02-25 01:48:52 UTC
This is still happening with 3.18.7-200.fc21.x86_64.

Comment 10 Miroslav Lichvar 2015-03-16 07:17:49 UTC
It seems that in my case, with the large number of interrupts in gpe0D, the problem was triggered by enabling power saving in /sys/module/snd_hda_intel/parameters/power_save.

I'm not sure why it worked with older kernels, but removing the echo command from my startup scripts that set it to 1 (which I think was added as a suggestion from powertop) seems to fix it for me.

Comment 11 cmilsted 2015-04-02 12:45:37 UTC
This is still happening on 3.19.3-200.fc21.

Running Mac book pro.

Happy to gather some debug output or a trace to help with this, if that helps.

cheers

Chris

Comment 12 Fedora Kernel Team 2015-04-28 18:29:39 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 21 kernel bugs.

Fedora 21 has now been rebased to 3.19.5-200.fc21.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 22, and are still experiencing this issue, please change the version to Fedora 22.

If you experience different issues, please open a new bug report for those.

Comment 13 Rafael Ávila de Espíndola 2015-04-30 19:57:24 UTC
The bug is still present in 3.19.5-200.fc21.x86_64.

Comment 14 cmilsted 2015-06-16 11:46:01 UTC
I am still getting this intermittently (happened this morning) on 

Linux cmilsted.fedora 4.0.4-301.fc22.x86_64 #1 SMP Thu May 21 13:10:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Is there any more debug I can gather to help diagnose this issue?

Many thanks

Chris

Comment 15 Rafael Ávila de Espíndola 2015-08-26 20:17:25 UTC
Looks like this was fixed. I no longer see the issue with 4.1.3-100.fc21.x86_64 on Fedora 21.

Comment 16 André Martins 2015-09-13 00:20:18 UTC
I am getting this bug too.
Linux aanm-MBP 4.1.6-201.fc22.x86_64 #1 SMP Fri Sep 4 17:49:24 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Comment 17 Josh Cogliati 2015-09-16 15:28:10 UTC
At least on my computer this is fixed.  I no longer see this with 4.1.6-100.fc21.x86_64.

$ grep enabled /sys/firmware/acpi/interrupts/*
/sys/firmware/acpi/interrupts/ff_gbl_lock:       0   enabled
/sys/firmware/acpi/interrupts/ff_pwr_btn:       0   enabled
/sys/firmware/acpi/interrupts/gpe07:       0   enabled
/sys/firmware/acpi/interrupts/gpe16:       1   enabled
/sys/firmware/acpi/interrupts/gpe17:       3   enabled

Comment 18 André Martins 2015-09-16 15:29:47 UTC
I had to disable gpe16 and that solved it. My laptop is an MBP, if that helps.

Comment 19 Justin M. Forbes 2015-10-20 19:21:39 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 22 kernel bugs.

Fedora 22 has now been rebased to 4.2.3-200.fc22.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 23, and are still experiencing this issue, please change the version to Fedora 23.

If you experience different issues, please open a new bug report for those.

Comment 20 cmilsted 2015-10-26 16:30:57 UTC
Yes, I am still suffering from this, and my kernel is now:

4.2.3-200.fc22.x86_64 #1 SMP Thu Oct 8 03:23:55 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

The fix is still:

echo disable > /sys/firmware/acpi/interrupts/gpe16

It can be one of a number of kworker threads. Is there a way to debug the kworker thread to see what the cause of this is, please?

Thanks

Chris

Comment 21 rfwebster 2016-02-07 12:32:42 UTC
This is happening for me on 4.3.4-300.fc23.x86_64, on a Dell Inspiron 5555 with an AMD A8-7410.

I have tried "grep enabled /sys/firmware/acpi/interrupts/*":

/sys/firmware/acpi/interrupts/ff_gbl_lock:       0   enabled
/sys/firmware/acpi/interrupts/ff_pwr_btn:       0   enabled
/sys/firmware/acpi/interrupts/gpe03:      52   enabled
/sys/firmware/acpi/interrupts/gpe16:       0   enabled


I can't disable gpe03 (even with su).

The system is usually fine for the first hour or so, but then the 100% kworker kicks in and cripples battery life.

Comment 22 rfwebster 2016-02-07 12:39:09 UTC
So, after posting that, I did some further reading:
I changed which USB port my mouse was plugged into [1] and the 100% kworker automagically went away.

??

[1] Post #29 https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1488426

Comment 23 cmilsted 2016-02-08 08:48:58 UTC
I am not using a USB mouse, and I still get this regularly.

In fact, it is happening right now:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                   
 6411 root      20   0       0      0      0 R 100.0  0.0   0:30.57 kworker/0:3

Looking at the interrupts:

# grep enabled /sys/firmware/acpi/interrupts/*
/sys/firmware/acpi/interrupts/ff_gbl_lock:       0   enabled
/sys/firmware/acpi/interrupts/ff_pwr_btn:       0   enabled
/sys/firmware/acpi/interrupts/gpe06:       0   enabled
/sys/firmware/acpi/interrupts/gpe07:       0   enabled
/sys/firmware/acpi/interrupts/gpe16:    6525   enabled
/sys/firmware/acpi/interrupts/gpe17:    2504   enabled
/sys/firmware/acpi/interrupts/gpe23:       0   enabled

So once again gpe16 seems to be the cause and indeed disabling this drops the kworker thread.

I found the following lkml post so I tried to gather some more debug:

https://lkml.org/lkml/2011/3/31/144

Attaching the out.txt. From a quick analysis I am really confused, as it seems to be XFS- or LUKS-related when I look at the functions dominating that worker thread:

# cat out.txt |awk '{print $8}' |uniq -c
      1 function=flush_to_ldisc
      1 function=console_callback
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      9 function=kcryptd_crypt
      1 function=cfq_kick_queue
      1 function=kcryptd_crypt
      1 function=cfq_kick_queue
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=console_callback
      1 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      4 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=wb_workfn
      1 function=kcryptd_crypt
      1 function=cfq_kick_queue
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      4 function=vmstat_update
      3 function=acpi_os_execute_deferred
      2 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      1 function=disk_events_workfn
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      2 function=push_to_pool
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      1 function=xfs_log_worker
      1 function=xlog_cil_push_work
     60 function=kcryptd_crypt
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=kcryptd_crypt
     42 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      3 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      2 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      1 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      2 function=xfs_buf_ioend_work
      1 function=scsi_requeue_run_queue
      2 function=xfs_buf_ioend_work
      1 function=cfq_kick_queue
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      6 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=mei_timer
      3 function=acpi_os_execute_deferred
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=xfs_reclaim_worker
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=kcryptd_crypt
      1 function=xfs_end_io
      1 function=cfq_kick_queue
      1 function=xlog_cil_push_work
      1 function=wb_workfn
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=xfs_end_io
      1 function=cfq_kick_queue
      1 function=xlog_cil_push_work
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=cfq_kick_queue
      1 function=blk_delay_work
      1 function=xlog_cil_push_work
      1 function=kcryptd_crypt
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=kcryptd_crypt
      4 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=xfs_end_io
      1 function=xlog_cil_push_work
      1 function=xfs_end_io
      1 function=cfq_kick_queue
      1 function=xlog_cil_push_work
      1 function=kcryptd_crypt
      3 function=acpi_os_execute_deferred
      1 function=blk_delay_work
      1 function=vmstat_update
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      5 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=disk_events_workfn
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      3 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      2 function=push_to_pool
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=mei_timer
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=kcryptd_crypt
      1 function=cfq_kick_queue
     10 function=kcryptd_crypt
      1 function=cfq_kick_queue
     67 function=kcryptd_crypt
      1 function=cfq_kick_queue
     39 function=kcryptd_crypt
      1 function=os_execute_work_item
      4 function=kcryptd_crypt
      1 function=cgroup_release_agent
      1 function=ioc_release_fn
      1 function=call_usermodehelper_exec_work
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=neigh_periodic_work
      3 function=acpi_os_execute_deferred
      1 function=css_release_work_fn
      1 function=css_killed_work_fn
      1 function=css_release_work_fn
      4 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=css_free_work_fn
      1 function=cgroup_pidlist_destroy_work_fn
      1 function=css_release_work_fn
      1 function=work_fn
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=css_free_work_fn
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=disk_events_workfn
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=cgroup_pidlist_destroy_work_fn
      1 function=os_execute_work_item
      2 function=cgroup_pidlist_destroy_work_fn
      1 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=wb_workfn
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=mei_timer
      2 function=vmstat_update
      3 function=acpi_os_execute_deferred
      3 function=vmstat_update
      4 function=kcryptd_crypt
      1 function=cfq_kick_queue
      1 function=kcryptd_crypt
      1 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=console_callback
      1 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      2 function=console_callback
      1 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=console_callback
      1 function=neigh_periodic_work
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      4 function=vmstat_update
      1 function=do_cache_clean
      2 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=disk_events_workfn
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=flush_to_ldisc
      1 function=wb_workfn
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=flush_to_ldisc
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      3 function=push_to_pool
      1 function=console_callback
      2 function=flush_to_ldisc
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      4 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=flush_to_ldisc
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=mei_timer
      1 function=console_callback
      2 function=flush_to_ldisc
      3 function=acpi_os_execute_deferred
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      1 function=console_callback
      2 function=os_execute_work_item
      1 function=console_callback
      2 function=flush_to_ldisc
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=flush_to_ldisc
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      1 function=os_execute_work_item
      1 function=console_callback
      1 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=kcryptd_crypt
      1 function=xfs_end_io
      1 function=cfq_kick_queue
      1 function=xlog_cil_push_work
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=flush_to_ldisc
      2 function=os_execute_work_item
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      1 function=xfs_reclaim_worker
      3 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      1 function=kcryptd_crypt
      1 function=xfs_end_io
      1 function=cfq_kick_queue
      1 function=xlog_cil_push_work
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=cfq_kick_queue
      1 function=xlog_cil_push_work
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      3 function=kcryptd_crypt
      1 function=cfq_kick_queue
      1 function=blk_delay_work
      1 function=xlog_cil_push_work
      1 function=disk_events_workfn
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      1 function=console_callback
      2 function=flush_to_ldisc
      3 function=acpi_os_execute_deferred
      2 function=console_callback
      2 function=flush_to_ldisc
      1 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=dm_wq_work
      1 function=blk_delay_work
      1 function=kcryptd_crypt
      1 function=blk_delay_work
      1 function=xfs_buf_ioend_work
      1 function=blk_delay_work
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=mei_timer
      3 function=acpi_os_execute_deferred
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      1 function=console_callback
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=push_to_pool
      2 function=flush_to_ldisc
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=console_callback
      1 function=os_execute_work_item
      1 function=console_callback
      6 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      1 function=wb_workfn
     77 function=kcryptd_crypt
      1 function=cfq_kick_queue
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      4 function=vmstat_update
      3 function=acpi_os_execute_deferred
      2 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      1 function=disk_events_workfn
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      2 function=flush_to_ldisc
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      1 function=console_callback
      6 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=os_execute_work_item
      1 function=mei_timer
      3 function=acpi_os_execute_deferred
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      4 function=kcryptd_crypt
      1 function=cfq_kick_queue
      1 function=kcryptd_crypt
      1 function=cfq_kick_queue
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=wb_workfn
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      4 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      6 function=acpi_os_execute_deferred
      6 function=vmstat_update
      1 function=vmstat_shepherd
      2 function=vmstat_update
      2 function=os_execute_work_item
      1 function=disk_events_workfn
      6 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      3 function=acpi_os_execute_deferred
      2 function=os_execute_work_item
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      1 function=xfs_reclaim_worker
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      4 function=os_execute_work_item
      2 function=console_callback
      3 function=acpi_os_execute_deferred
      4 function=console_callback
      3 function=acpi_os_execute_deferred
      1 function=console_callback
      1 function=os_execute_work_item
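One note on the `uniq -c` pipeline above: without a preceding `sort` it only collapses adjacent duplicates, which is why the same functions repeat throughout the list as burst lengths rather than totals. A minimal sketch for overall totals (the helper name is made up for illustration):

```shell
# Sketch: total queued work items per function from a trace_pipe capture.
# Field 8 of each workqueue_queue_work trace line is the "function=..."
# column; sorting before uniq -c turns adjacent-run counts into totals.
summarize_trace() {
    awk '{print $8}' "$1" | sort | uniq -c | sort -rn
}

# Usage: summarize_trace out.txt | head
```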

I did try the second debug option and looked at the stack of process 6411, which in my case was kworker/0:3:

# cat stack
[<ffffffffffffffff>] 0xffffffffffffffff


Not sure that helps.

If anyone has any other suggestions, I am willing to gather some more debug output the next time this intermittent issue arises.

Thanks

Chris

Comment 24 cmilsted 2016-02-08 08:50:50 UTC
Created attachment 1122084 [details]
Kworker trace when consuming 100% CPU on Fedora 23

I also updated the bug to Fedora 23 as this is still an issue for me on F23:

# cat /etc/fedora-release 
Fedora release 23 (Twenty Three)


Thanks

Chris

Comment 25 cmilsted 2016-02-15 09:22:15 UTC
Still happening, and I made sure I have run an update, so this is the latest kernel:

4.3.5-300.fc23.x86_64

Still running on the same MBP.

System Information
        Manufacturer: Apple Inc.
        Product Name: MacBookPro11,3

As this had some strange XFS/LUKS information, I am also grabbing my drive info:

$ sudo hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
	Model Number:       APPLE SSD SM0512F                       
	Serial Number:      XXXX      
	Firmware Revision:  UXM2JA1Q
	Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
	Used: unknown (minor revision code 0x0039) 
	Supported: 8 7 6 5 
	Likely used: 8
Configuration:
	Logical		max	current
	cylinders	16383	16383
	heads		16	16
	sectors/track	63	63
	--
	CHS current addressable sectors:   16514064
	LBA    user addressable sectors:  268435455
	LBA48  user addressable sectors:  977105060
	Logical  Sector size:                   512 bytes
	Physical Sector size:                  4096 bytes
	Logical Sector-0 offset:                  0 bytes
	device size with M = 1024*1024:      477102 MBytes
	device size with M = 1000*1000:      500277 MBytes (500 GB)
	cache/buffer size  = unknown
	Nominal Media Rotation Rate: Solid State Device
Capabilities:
	LBA, IORDY(can be disabled)
	Queue depth: 32
	Standby timer values: spec'd by Standard, no device specific minimum
	R/W multiple sector transfer: Max = 16	Current = 16
	Recommended acoustic management value: 128, current value: 0
	DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
	     Cycle time: min=120ns recommended=120ns
	PIO: pio0 pio1 pio2 pio3 pio4 
	     Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
	Enabled	Supported:
	   *	SMART feature set
	    	Security Mode feature set
	   *	Power Management feature set
	   *	Write cache
	   *	Look-ahead
	   *	Host Protected Area feature set
	   *	WRITE_BUFFER command
	   *	READ_BUFFER command
	   *	NOP cmd
	   *	DOWNLOAD_MICROCODE
	   *	SET_MAX security extension
	    	Automatic Acoustic Management feature set
	   *	48-bit Address feature set
	   *	Device Configuration Overlay feature set
	   *	Mandatory FLUSH_CACHE
	   *	FLUSH_CACHE_EXT
	   *	SMART error logging
	   *	SMART self-test
	   *	General Purpose Logging feature set
	   *	WRITE_{DMA|MULTIPLE}_FUA_EXT
	   *	64-bit World wide name
	   *	{READ,WRITE}_DMA_EXT_GPL commands
	   *	Segmented DOWNLOAD_MICROCODE
	   *	Gen1 signaling speed (1.5Gb/s)
	   *	Gen2 signaling speed (3.0Gb/s)
	   *	Gen3 signaling speed (6.0Gb/s)
	   *	Native Command Queueing (NCQ)
	   *	Phy event counters
	   *	DMA Setup Auto-Activate optimization
	   *	Software settings preservation
	   *	SET MAX SETPASSWORD/UNLOCK DMA commands
	   *	WRITE BUFFER DMA command
	   *	READ BUFFER DMA command
	   *	Data Set Management TRIM supported (limit 8 blocks)
Security: 
	Master password revision code = 65534
		supported
	not	enabled
	not	locked
		frozen
	not	expired: security count
		supported: enhanced erase
	6min for SECURITY ERASE UNIT. 32min for ENHANCED SECURITY ERASE UNIT. 
Logical Unit WWN Device Identifier: 5002538655584d30
	NAA		: 5
	IEEE OUI	: 002538
	Unique ID	: 655584d30
Integrity word not set (found 0xfad0, expected 0x100a5)

Comment 26 Marek Novotny 2016-05-20 14:46:35 UTC
I am experiencing similar behavior with
4.4.9-300.fc23.x86_64+debug #1 SMP Wed May 4 23:44:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

The kworker threads are constantly using up to 50% of every CPU:

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                   
  407 root       0 -20       0      0      0 S  44,9  0,0 240:43.05 kworker/0:1H                                                              
  409 root       0 -20       0      0      0 R  44,5  0,0 236:21.40 kworker/3:1H                                                              
30937 root       0 -20       0      0      0 R  44,5  0,0 219:15.90 kworker/1:2H                                                              
 1881 root       0 -20       0      0      0 S  43,9  0,0 222:08.55 kworker/2:2H

Comment 27 Laura Abbott 2016-09-23 19:26:42 UTC
*********** MASS BUG UPDATE **************
 
We apologize for the inconvenience.  There are a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 23 kernel bugs.

Fedora 23 has now been rebased to 4.7.4-100.fc23.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.
 
If you have moved on to Fedora 24 or 25, and are still experiencing this issue, please change the version to Fedora 24 or 25.
 
If you experience different issues, please open a new bug report for those.

Comment 28 cmilsted 2016-09-28 18:04:07 UTC
This is still being seen on Fedora 24:

$ uname -a
Linux cmilsted.fedora 4.7.3-200.fc24.x86_64 #1 SMP Wed Sep 7 17:31:21 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[cmilsted@cmilsted ~]$ top

top - 19:03:03 up 2 min,  1 user,  load average: 3.73, 1.68, 0.63
Tasks: 365 total,   2 running, 363 sleeping,   0 stopped,   0 zombie
%Cpu(s):  5.5 us, 13.7 sy,  0.0 ni, 79.3 id,  1.4 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem : 16338992 total, 10686304 free,  3859716 used,  1792972 buff/cache
KiB Swap:  8191996 total,  8191996 free,        0 used. 12005400 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND     
    4 root      20   0       0      0      0 R  99.7  0.0   1:17.37 kworker/0:0 
 4762 cmilsted  20   0 1022292 128724  67532 S  19.3  0.8   0:01.31 chrome      
 3556 cmilsted  20   0  799180 269180 104456 S  13.0  1.6   0:05.50 chrome      
 3456 cmilsted  20   0 1303320 290276 138528 S  10.6  1.8   0:15.18 chrome      
 4017 cmilsted  20   0 1266056 237428  61596 S   4.0  1.5   0:07.40 chrome      
 2765 cmilsted  20   0 2270080 282772  94756 S   3.3  1.7   0:10.72 gnome-shell 
 4101 cmilsted  20   0 1154600 183904  60768 S   2.0  1.1   0:03.40 chrome      
 4188 cmilsted  20   0 1184776 174532  59120 S   2.0  1.1   0:03.07 chrome      
 2663 root      20   0  225288  70072  42188 S   1.7  0.4   0:05.47 Xorg        
 3632 cmilsted  20   0  959396  79424  49212 S   1.3  0.5   0:01.25 chrome      
 3623 cmilsted  20   0 1077628 145672  55420 S   0.7  0.9   0:08.26 chrome      
 2948 cmilsted  20   0  496792  84804  32520 S   0.3  0.5   0:01.52 SpiderOakB+ 
 3650 cmilsted  20   0  959396  86312  52088 S   0.3  0.5   0:01.00 chrome      
 3821 cmilsted  20   0 1112928 142140  62812 S   0.3  0.9   0:01.60 chrome      

Still an issue

Comment 29 Hans de Goede 2016-11-04 19:17:40 UTC
cmilsted, this link has some good hints on how to figure out what is actually burning CPU under the kworker thread dispatches; once we know that, we can debug this further:

http://askubuntu.com/a/421916
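For the record, the approach in that link boils down to sampling the busy kworker with perf, roughly as follows (a sketch, not an exact recipe: it assumes perf is installed, is run as root, and that the PID placeholder is replaced with the busy kworker's PID from top):

```shell
# Sample the busy kworker's call stacks for 30 seconds, then report
# which kernel functions the time is actually being spent in.
PID=4            # assumption: replace with the busy kworker's PID from top
perf record -g -e cpu-clock -p "$PID" -- sleep 30
perf report --stdio
```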

Comment 30 cmilsted 2016-11-10 16:40:11 UTC
Hi Hans,

I did this and I have a perf trace now to look at.

High level:

+   95.18%     0.00%  kworker/0:2      [kernel.kallsyms]                [k] process_one_work
+   95.18%     0.00%  kworker/0:2      [kernel.kallsyms]                [k] worker_thread
+   95.18%     0.00%  kworker/0:2      [kernel.kallsyms]                [k] kthread
+   95.18%     0.00%  kworker/0:2      [kernel.kallsyms]                [k] ret_from_fork
+   95.17%     0.00%  kworker/0:2      [kernel.kallsyms]                [k] acpi_os_execute_deferred
+   95.16%     0.00%  kworker/0:2      [kernel.kallsyms]                [k] acpi_ev_notify_dispatch
+   95.16%     0.00%  kworker/0:2      [apple_gmux]                     [k] gmux_notify_handler
+   95.10%     8.46%  kworker/0:2      [apple_gmux]                     [k] gmux_index_wait_ready.isra.8
+   86.64%     0.00%  kworker/0:2      [kernel.kallsyms]                [k] __const_udelay
+   86.56%    86.56%  kworker/0:2      [kernel.kallsyms]                [k] delay_tsc
+   57.04%     0.02%  kworker/0:2      [apple_gmux]                     [k] gmux_index_write8
+   38.12%     0.00%  kworker/0:2      [apple_gmux]                     [k] gmux_index_read8


Diving down one of these trees, it looks like gmux_index_wait_ready.isra.8 is the sticking point:

-   86.56%    86.56%  kworker/0:2      [kernel.kallsyms]                [k] delay_tsc
     ret_from_fork
     kthread
     worker_thread
     process_one_work
     acpi_os_execute_deferred
     acpi_ev_notify_dispatch
   - gmux_notify_handler
      - 51.85% gmux_index_write8
           gmux_index_wait_ready.isra.8
           __const_udelay
           delay_tsc
      - 34.71% gmux_index_read8
         + gmux_index_wait_ready.isra.8

This seems to match other reported bugs and experiences:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1321824

When I look at the annotation I see the following:

gmux_index_wait_ready.isra.8  /lib/modules/4.7.7-200.fc24.x86_64/kernel/drivers/platform/x86/apple-gmux.k
       │
       │    /tmp/perf-kmod-I6oORg:     file format elf64-x86-64
       │
       │
       │    Disassembly of section .text:
       │
       │    0000000000000030 <gmux_index_wait_ready.isra.8>:
       │    gmux_index_wait_ready.isra.8():
       │    → callq  gmux_index_wait_ready.isra.8+0x5
       │      push   %rbp
       │      mov    %rsp,%rbp
       │      push   %r13
       │      push   %r12
       │      push   %rbx
       │      mov    (%rdi),%eax
       │      mov    %rdi,%r13
       │      lea    0xd4(%rax),%edx
  0.08 │      in     (%dx),%al
       │      mov    $0xc8,%r12d
       │      mov    %eax,%ebx
       │    ↓ jmp    48
  0.03 │24:   mov    0x0(%r13),%rcx
       │      lea    0xd0(%rcx),%edx
 92.43 │      in     (%dx),%al
  0.34 │      lea    0xd4(%rcx),%edx
  6.81 │      in     (%dx),%al
  0.20 │      mov    $0x68dbc,%edi
       │      mov    %eax,%ebx
  0.08 │    → callq  gmux_index_wait_ready.isra.8+0x42
  0.03 │      sub    $0x1,%r12d
       │    ↓ je     53
       │48:   and    $0x1,%ebx
       │    ↑ jne    24
       │      mov    $0x1,%r12d
       │53:   pop    %rbx
       │      mov    %r12d,%eax
       │      pop    %r12
       │      pop    %r13
       │      pop    %rbp
       │    ← retq


So I suspect this may be the gmux stuff?

I can provide the trace if anyone can read this much better than I can.

Many thanks

Comment 31 Hans de Goede 2016-11-10 16:51:29 UTC
Ok, so it seems that the real problem here is that acpi_ev_notify_dispatch keeps getting called, which suggests that some interrupt from the gmux to the acpi-ec (acpi-embedded-control) is not being cleared.

So this likely is a bug in the apple gmux driver.

Your best bet is probably to send a mail to the platform-driver-x86 list (platform-driver-x86@vger.kernel.org); it seems that the apple-gmux driver does not have an active maintainer atm though, so I'm not sure how much help you will get there, but that is the best place to pursue this further.

Comment 32 Justin M. Forbes 2017-04-11 14:43:17 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There are a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 24 kernel bugs.

Fedora 24 has now been rebased to 4.10.9-100.fc24.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 26, and are still experiencing this issue, please change the version to Fedora 26.

If you experience different issues, please open a new bug report for those.

Comment 33 Justin M. Forbes 2017-04-28 17:15:38 UTC
*********** MASS BUG UPDATE **************
This bug is being closed with INSUFFICIENT_DATA as there has not been a response in 2 weeks. If you are still experiencing this issue, please reopen and attach the 
relevant data from the latest kernel you are running and any data that might have been requested previously.

Comment 34 nick.clasener 2017-10-22 21:54:26 UTC
I am suffering from this atm.
Fedora 26

I'm no pro, so if more info is needed, tell me which commands to run to provide the right diagnostics.

OS: Fedora release 26 x86_64
Host Aspire VN7-792G 
Kernel 4.13.5-200.fc26.x86_64
CPU: Intel I7-6700HQ
GPU: NVIDIA GeForce GTX 960M
GPU: Intel HD Graphics 530
Memory: 32036MiB

cat /sys/kernel/debug/tracing/trace_pipe  > out.txt

             zsh-2746  [002] d...   137.352579: workqueue_queue_work: work struct=ffff9f0d909f3208 function=flush_to_ldisc workqueue=ffff9f0dde019600 req_cpu=8192 cpu=4294967295
             zsh-2746  [002] d...   137.352614: workqueue_queue_work: work struct=ffff9f0d909f3208 function=flush_to_ldisc workqueue=ffff9f0dde019600 req_cpu=8192 cpu=4294967295
             zsh-2746  [002] d...   137.352619: workqueue_queue_work: work struct=ffff9f0d909f3208 function=flush_to_ldisc workqueue=ffff9f0dde019600 req_cpu=8192 cpu=4294967295
             zsh-2746  [002] d...   137.352624: workqueue_queue_work: work struct=ffff9f0d909f3208 function=flush_to_ldisc workqueue=ffff9f0dde019600 req_cpu=8192 cpu=4294967295
          <idle>-0     [004] d.h.   137.352745: workqueue_queue_work: work struct=ffff9f0d76ff24d0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.353197: workqueue_queue_work: work struct=ffff9f0d76ff2610 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [005] d.h.   137.353207: workqueue_queue_work: work struct=ffff9f0dd4ad4a38 function=gen6_pm_rps_work [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=5
          <idle>-0     [001] d.h.   137.354772: workqueue_queue_work: work struct=ffff9f0d78525ed0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [001] d.h.   137.355278: workqueue_queue_work: work struct=ffff9f0d78525390 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [003] d.h.   137.355912: workqueue_queue_work: work struct=ffff9f0d76fdd090 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.356110: workqueue_queue_work: work struct=ffff9f0dd4ad4a38 function=gen6_pm_rps_work [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=4
          <idle>-0     [003] d.s.   137.358429: workqueue_queue_work: work struct=ffff9f0dd4ad3e60 function=__i915_gem_free_work [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=3
          <idle>-0     [003] d.s.   137.360402: workqueue_queue_work: work struct=ffffffff81f203e0 function=delayed_fput workqueue=ffff9f0dde019000 req_cpu=8192 cpu=3
          <idle>-0     [004] d.h.   137.360512: workqueue_queue_work: work struct=ffff9f0d76ff2a50 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.366742: workqueue_queue_work: work struct=ffff9f0dd4ad4a38 function=gen6_pm_rps_work [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=4
          <idle>-0     [004] d.s.   137.366924: workqueue_queue_work: work struct=ffff9f0dd4b00ee8 function=ieee80211_iface_work [mac80211] workqueue=ffff9f0dd59e6e00 req_cpu=8192 cpu=4294967295
         compton-2274  [001] d...   137.367932: workqueue_queue_work: work struct=ffff9f0dd4ad4a38 function=gen6_pm_rps_work [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=1
          <idle>-0     [001] d.h.   137.369101: workqueue_queue_work: work struct=ffff9f0d78525050 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [001] d.h.   137.369673: workqueue_queue_work: work struct=ffff9f0d785251d0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [001] d.h.   137.375073: workqueue_queue_work: work struct=ffff9f0d78525150 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.375803: workqueue_queue_work: work struct=ffff9f0d76ff2890 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.376460: workqueue_queue_work: work struct=ffff9f0d76ff2cd0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.376989: workqueue_queue_work: work struct=ffff9f0d76ff2fd0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.378376: workqueue_queue_work: work struct=ffff9f0d76ff21d0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.381741: workqueue_queue_work: work struct=ffff9f0d76ff22d0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.382247: workqueue_queue_work: work struct=ffff9f0d76ff26d0 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.382787: workqueue_queue_work: work struct=ffff9f0d76ff2e90 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
   kworker/u16:3-140   [004] d...   137.383713: workqueue_queue_work: work struct=ffff9f0dd4ad2d58 function=intel_fbc_work_fn [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=4
          <idle>-0     [004] d.h.   137.387842: workqueue_queue_work: work struct=ffff9f0d76ff2450 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.388050: workqueue_queue_work: work struct=ffff9f0d90aa6450 function=intel_atomic_commit_work [i915] workqueue=ffff9f0dde019600 req_cpu=8192 cpu=4294967295
          <idle>-0     [004] dNh.   137.388054: workqueue_queue_work: work struct=ffff9f0dd4ad49d0 function=intel_atomic_helper_free_state_worker [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=4
   kworker/u16:3-140   [004] d...   137.400344: workqueue_queue_work: work struct=ffff9f0dd4ad2d58 function=intel_fbc_work_fn [i915] workqueue=ffff9f0dde019000 req_cpu=8192 cpu=4
          <idle>-0     [001] d.s.   137.434362: workqueue_queue_work: work struct=ffffffffc06a52e0 function=gc_worker [nf_conntrack] workqueue=ffff9f0dde019400 req_cpu=8192 cpu=1
          <idle>-0     [004] dNh.   137.434706: workqueue_queue_work: work struct=ffffffff81f85840 function=console_callback workqueue=ffff9f0dde019000 req_cpu=8192 cpu=4
          <idle>-0     [003] d.s.   137.458401: workqueue_queue_work: work struct=ffff9f0dd4ad56a8 function=i915_gem_idle_work_handler [i915] workqueue=ffff9f0dd57a9c00 req_cpu=8192 cpu=4294967295
          <idle>-0     [004] d.h.   137.464377: workqueue_queue_work: work struct=ffff9f0d76ff2010 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.s.   137.469504: workqueue_queue_work: work struct=ffff9f0dd4b00ee8 function=ieee80211_iface_work [mac80211] workqueue=ffff9f0dd59e6e00 req_cpu=8192 cpu=4294967295
          <idle>-0     [004] d.s.   137.506365: workqueue_queue_work: work struct=ffffffff81eea680 function=sync_cmos_clock workqueue=ffff9f0dde019a00 req_cpu=8192 cpu=4
          <idle>-0     [006] d.s.   137.506405: workqueue_queue_work: work struct=ffff9f0dd3d1b400 function=mmc_rescan [mmc_core] workqueue=ffff9f0dde019800 req_cpu=8192 cpu=6
   rtsx_usb_ms_1-729   [004] d...   137.506550: workqueue_queue_work: work struct=ffff9f0dd5223d60 function=pm_runtime_work workqueue=ffff9f0dda6d2400 req_cpu=8192 cpu=4
          <idle>-0     [006] d.s.   137.557402: workqueue_queue_work: work struct=ffff9f0dd5223560 function=pm_runtime_work workqueue=ffff9f0dda6d2400 req_cpu=8192 cpu=6
     kworker/6:2-448   [006] d...   137.557409: workqueue_queue_work: work struct=ffff9f0dd498e1e8 function=pm_runtime_work workqueue=ffff9f0dda6d2400 req_cpu=8192 cpu=6
          <idle>-0     [001] d.s.   137.562399: workqueue_queue_work: work struct=ffffffffc06a52e0 function=gc_worker [nf_conntrack] workqueue=ffff9f0dde019400 req_cpu=8192 cpu=1
          <idle>-0     [004] d.h.   137.563145: workqueue_queue_work: work struct=ffff9f0d76ff2910 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.s.   137.571844: workqueue_queue_work: work struct=ffff9f0dd4b00ee8 function=ieee80211_iface_work [mac80211] workqueue=ffff9f0dd59e6e00 req_cpu=8192 cpu=4294967295
          <idle>-0     [004] d.h.   137.578473: workqueue_queue_work: work struct=ffff9f0d76ff2d90 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.587378: workqueue_queue_work: work struct=ffff9f0d76ff2b10 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.h.   137.653376: workqueue_queue_work: work struct=ffff9f0d76ff2d50 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [004] d.s.   137.674238: workqueue_queue_work: work struct=ffff9f0dd4b00ee8 function=ieee80211_iface_work [mac80211] workqueue=ffff9f0dd59e6e00 req_cpu=8192 cpu=4294967295
          <idle>-0     [001] d.s.   137.690402: workqueue_queue_work: work struct=ffffffffc06a52e0 function=gc_worker [nf_conntrack] workqueue=ffff9f0dde019400 req_cpu=8192 cpu=1
          <idle>-0     [004] d.h.   137.735378: workqueue_queue_work: work struct=ffff9f0d76ff2490 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [003] d.s.   137.762423: workqueue_queue_work: work struct=ffff9f0dd4ad5648 function=i915_gem_retire_work_handler [i915] workqueue=ffff9f0dd57a9c00 req_cpu=8192 cpu=4294967295
          <idle>-0     [004] d.s.   137.776501: workqueue_queue_work: work struct=ffff9f0dd4b00ee8 function=ieee80211_iface_work [mac80211] workqueue=ffff9f0dd59e6e00 req_cpu=8192 cpu=4294967295
          <idle>-0     [004] d.h.   137.784421: workqueue_queue_work: work struct=ffff9f0d76ff2c10 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [001] d.s.   137.818399: workqueue_queue_work: work struct=ffffffffc06a52e0 function=gc_worker [nf_conntrack] workqueue=ffff9f0dde019400 req_cpu=8192 cpu=1
          <idle>-0     [004] d.h.   137.864747: workqueue_queue_work: work struct=ffff9f0d76ff2290 function=acpi_os_execute_deferred workqueue=ffff9f0dda190800 req_cpu=0 cpu=0
          <idle>-0     [003] d.s.   137.866400: workqueue_queue_work: work

Comment 35 nick.clasener 2017-10-22 22:02:10 UTC
To add to my post above:

~ >>> grep enabled /sys/firmware/acpi/interrupts/*  
/sys/firmware/acpi/interrupts/ff_gbl_lock:       0  EN     enabled      unmasked
/sys/firmware/acpi/interrupts/ff_pwr_btn:       0  EN     enabled      unmasked
/sys/firmware/acpi/interrupts/gpe0A:       0  EN     enabled      unmasked
/sys/firmware/acpi/interrupts/gpe0C:       0  EN     enabled      unmasked
/sys/firmware/acpi/interrupts/gpe03:     299  EN     enabled      unmasked
/sys/firmware/acpi/interrupts/gpe6F:       0  EN     enabled      unmasked
/sys/firmware/acpi/interrupts/gpe61:  149392     STS enabled      unmasked
/sys/firmware/acpi/interrupts/gpe62:       0  EN     enabled      unmasked
/sys/firmware/acpi/interrupts/gpe66:       4  EN     enabled      unmasked
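
The gpe61 counter above (149392, with STS set) looks like it may be a GPE interrupt storm, the same pattern Hans described for the gmux case. A common way to confirm and mitigate a storm like this (a mitigation only, not a fix; requires root, and lasts until reboot, after which "enable" re-arms the GPE) is:

```shell
# Spot the runaway GPE: sort the per-GPE counters, highest first.
grep . /sys/firmware/acpi/interrupts/gpe?? | sort -t: -k2 -rn | head
# Mask the storming GPE (gpe61 in this case) via the ACPI sysfs interface:
echo "disable" | sudo tee /sys/firmware/acpi/interrupts/gpe61
```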

I've also just run Fedora with the previous kernel; there I had all 8 cores at full blast...

