I still haven't found a way to reproduce the problem, but here are at least the symptoms:
- A single CPU core suddenly runs at 100%; I notice because of the alarming fan noise.
- Checking the process list for the cause shows "/usr/sbin/abrtd -d -s" is the culprit.
- Running systemctl restart abrtd.service fixes it.
I don't know what is causing this; it's not often, but it happens. Maybe you can tell me which logs could hint at the cause?
Thank you for taking the time to report this issue. Could you please strace the misbehaving process? The output of eu-stack would help us a lot too.
Luckily, the problem occurred once again. I collected the information saved in /var/spool/abrt into a 7z archive, which is attached here. What happened: current Fedora 25 KDE desktop, VirtualBox 5.1.14 open. Then the screen went black :-( I opened a terminal (Ctrl-Alt-F2) as root and checked my user processes: the whole GUI (Plasma) had died and VirtualBox had shut down automatically. I'm not sure whether it is a problem in Plasma or in the combination of VirtualBox and Plasma (although VirtualBox mentioned that 5.1.14 fixed a problem with Qt... I'm back on 5.1.12 for now). So, the abrtd process was running and consuming 100% of one CPU core. Whatever it was waiting for (interaction in a non-existing GUI, perhaps?), it never finished. I restarted the abrtd service, restarted lightdm.service and logged in again. abrt-cli and gnome-abrt showed the two entries provided in the 7z archive. I cannot say what the real cause of the blackout was.
Created attachment 1244710 [details] abrt-spool dump
I can confirm this issue on my Fedora 25 machine. For me it happened without a black screen. I also did not have virtualbox open.
This is still an annoying issue, even with the latest Fedora 25 updates. I'm this close to disabling abrtd completely... but that wasn't the intention of this tool, was it? :-)
By the way, the problem with the black screens is fixed; it was an SELinux problem, and I resolved it by adding my own policies. The abrtd problem can happen at any time; I haven't found a pattern yet.
I cannot open the attached 'abrt-spool dump'. However, I would appreciate it if anyone could strace the abrtd process while it is consuming all system resources. A regular backtrace from GDB would be helpful too.
I have the same problem on my Fedora 25 machine. For me it happened with Chrome and Dolphin running. No blocking of any kind, simply one core at 100%. Sorry, I can't give you the requested data because I read this bug after I had killed the process. I'll collect it next time (it has happened other times on my machine).
I just experienced this on Fedora 26. I hooked up strace and I just see it spamming this:

poll([{fd=3, events=POLLIN|POLLPRI}, {fd=5, events=POLLIN}, {fd=6, events=POLLIN|POLLPRI}, {fd=9, events=POLLIN|POLLPRI}, {fd=14, events=POLLIN}], 5, -1) = 1 ([{fd=14, revents=POLLNVAL}])
Thank you, Diego. It would be nice if you could hook up gdb too. A list of all abrt processes would be helpful, as would some journal messages. Looking at the poll call, I guess several problems occurred at the same time, but I am not able to reproduce the issue.
Hi! I seem to be running into a similar, if not the exact same, thing on FC25 with a Dell XPS 13 9360. It throws a lot of MCEs for thermal events for the CPU, which may be confusing abrtd. I'd supply /var/spool/abrt, but it is currently 1.2 GB in size (resulting in a 28 MB tbz2 file). Let me know if you want it, and if so, I'll upload it somewhere.

I'm attaching two files:

abrtd-strace-output.txt: some strace output from abrtd in this state: a lot of that polling, then getting notified by the oops monitor, then iterating through every directory in /var/spool/abrt, etc.

abrt-processes-journal-output-and-gdb.txt: the processes that exist, the journalctl output from abrtd, and a sample of a few backtraces gathered with debug symbols. I'm not proficient enough with GDB to generate more than this without some feedback about what to do.

I hope this is enough information to help you dig into this; if not, let me know what you need. It currently reproduces basically on every boot for me, and stays that way until I either attach gdb to abrtd and pause it, or stop the service.
Created attachment 1288993 [details] abrtd-strace-output.txt.gz (strace output, gzipped to get around file size limits)
Created attachment 1288994 [details] abrt processes running, journalctl output, and some gdb traces
(In reply to Diego Fernandez from comment #9)
> I just experienced this on Fedora 26. I hooked up strace and I just see it
> spamming this:
>
> poll([{fd=3, events=POLLIN|POLLPRI}, {fd=5, events=POLLIN}, {fd=6,
> events=POLLIN|POLLPRI}, {fd=9, events=POLLIN|POLLPRI}, {fd=14,
> events=POLLIN}], 5, -1) = 1 ([{fd=14, revents=POLLNVAL}])

The same thing happened to me on Fedora 26. systemctl reports:

Jul 14 19:57:46 gnu-3.sc.intel.com abrt-server[19554]: Preserving oops '.' because DropNotReportableOopses is 'no'
Jul 14 19:59:19 gnu-3.sc.intel.com abrt-notification[17365]: System encountered a non-fatal error in ??()
Jul 14 20:03:47 gnu-3.sc.intel.com abrt-server[19558]: Can't find a meaningful backtrace for hashing in '.'
Jul 14 20:03:47 gnu-3.sc.intel.com abrt-server[19558]: Option 'DropNotReportableOopses' is not configured
Jul 14 20:03:47 gnu-3.sc.intel.com abrt-server[19558]: Preserving oops '.' because DropNotReportableOopses is 'no'
Jul 14 20:05:20 gnu-3.sc.intel.com abrt-notification[23336]: System encountered a non-fatal error in ??()
Jul 14 20:09:44 gnu-3.sc.intel.com abrt-server[19561]: Can't find a meaningful backtrace for hashing in '.'
Jul 14 20:09:44 gnu-3.sc.intel.com abrt-server[19561]: Option 'DropNotReportableOopses' is not configured
Jul 14 20:09:44 gnu-3.sc.intel.com abrt-server[19561]: Preserving oops '.' because DropNotReportableOopses is 'no'
Jul 14 20:11:09 gnu-3.sc.intel.com abrt-notification[2433]: System encountered a non-fatal error in ??()
Just happened to my F26 machine. Reading the status, I saw several messages like this:

Jul 21 18:00:15 kujegerXPS.localdomain abrt-server[21473]: Deleting problem directory ccpp-2017-07-21-17:58:56.38692-16045 (dup of ccpp-2017-07-16-18:28:22.628060-16045)
Jul 21 18:00:15 kujegerXPS.localdomain abrt-notification[21842]: Process 17796 (glib-pacrunner) crashed in EncodeLatin1(js::ExclusiveContext*, JSString*)()
Jul 21 18:00:15 kujegerXPS.localdomain abrt-server[21824]: Deleting problem directory ccpp-2017-07-21-18:00:14.832829-16045 (dup of ccpp-2017-07-16-18:28:22.628060-16045)
Jul 21 18:00:16 kujegerXPS.localdomain abrt-notification[21889]: Process 17796 (glib-pacrunner) crashed in EncodeLatin1(js::ExclusiveContext*, JSString*)()

This points me towards bug 1459779. Restarting abrtd.service made the CPU use go away, and hopefully removing the system proxy will prevent it from happening again.
$ journalctl -b -u abrtd.service
-- Logs begin at Sun 2017-05-14 15:53:29 PDT, end at Thu 2017-09-21 21:29:20 PDT. --
Sep 20 20:12:26 localhost.localdomain systemd[1]: Starting ABRT Automated Bug Reporting Tool...
Sep 20 20:12:28 localhost.localdomain abrtd[780]: '/var/spool/abrt/ccpp-2017-07-17-15:59:41.912833-808.new' is not a problem directory
Sep 20 20:12:28 localhost.localdomain systemd[1]: Started ABRT Automated Bug Reporting Tool.
Sep 21 07:20:26 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-17-15:22:11.120527-820'
Sep 21 07:20:32 localhost.localdomain abrt-notification[11473]: Process 11360 (flatpak) crashed in meta_fetch_on_complete()
Sep 21 10:37:18 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-14-13:53:06.331700-813'
Sep 21 10:37:18 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-11-06:18:59.612218-811'
Sep 21 10:37:26 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-07:20:26.836821-811'
Sep 21 10:37:26 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:37:17.333137-811'
Sep 21 10:37:35 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-11-06:17:46.540844-811'
Sep 21 10:37:46 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:37:25.885816-811'
Sep 21 10:37:51 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:37:35.734132-811'
Sep 21 10:37:58 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:37:44.741708-811'
Sep 21 10:38:02 localhost.localdomain abrt-server[18900]: Path '/var/spool/abrt/ccpp-2017-09-21-10:37:17.333137-811' isn't directory
Sep 21 10:38:35 localhost.localdomain abrt-server[18865]: Lock file '.lock' is locked by process 1989
Sep 21 10:38:36 localhost.localdomain abrt-server[18865]: Lock file '.lock' is locked by process 1989
Sep 21 10:38:36 localhost.localdomain abrt-notification[19292]: Process 18072 (firefox) crashed in ??()
Sep 21 10:38:55 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:37:13.706905-811'
Sep 21 10:39:04 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:38:53.662546-811'
Sep 21 10:39:05 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:38:55.506746-811'
Sep 21 10:39:07 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:39:04.109904-811'
Sep 21 10:40:18 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting new directory 'ccpp-2017-09-21-10:40:17.721569-811'
Sep 21 10:40:25 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:38:53.239261-811'
Sep 21 10:40:25 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:40:18.567977-811'
Sep 21 10:40:27 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:40:24.455640-811'
Sep 21 10:43:16 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting new directory 'ccpp-2017-09-21-10:43:16.311728-811'
Sep 21 10:43:17 localhost.localdomain abrt-server[20286]: Deleting problem directory ccpp-2017-09-21-10:43:15.981492-811 (dup of ccpp-2017-09-21-10:40:17.379345-811)
Sep 21 10:43:17 localhost.localdomain abrt-notification[20334]: Process 19181 (firefox) crashed in ??()
Sep 21 10:43:18 localhost.localdomain abrt-server[20329]: Deleting problem directory ccpp-2017-09-21-10:43:16.663176-811 (dup of ccpp-2017-09-21-10:40:17.379345-811)
Sep 21 10:43:20 localhost.localdomain abrt-server[20373]: Deleting problem directory ccpp-2017-09-21-10:43:17.130431-811 (dup of ccpp-2017-09-21-10:40:17.379345-811)
Sep 21 10:43:24 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting new directory 'ccpp-2017-09-21-10:43:18.174259-811'
Sep 21 10:47:13 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting new directory 'ccpp-2017-09-21-10:47:12.552313-811'
Sep 21 10:47:15 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:47:13.49363-811.n
Sep 21 10:47:15 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:47:13.49363-811'
Sep 21 10:47:15 localhost.localdomain abrt-server[22452]: Deleting problem directory ccpp-2017-09-21-10:47:12.10615-811 (dup of ccpp-2017-09-21-10:40:17.379345-811)
Sep 21 10:47:25 localhost.localdomain abrt-server[22452]: Lock file '.lock' is locked by process 1989
Sep 21 10:47:28 localhost.localdomain abrt-server[22544]: Path '/var/spool/abrt/ccpp-2017-09-21-10:47:13.49363-811' isn't directory
Sep 21 10:56:28 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:40:17.379345-811'
Sep 21 10:56:28 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-11-08:11:26.761201-811'
Sep 21 10:56:28 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:47:27.856537-811'
Sep 21 10:56:31 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:56:27.961784-811.
Sep 21 10:56:36 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:56:27.961784-811'
Sep 21 10:56:42 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:56:30.643112-811'
Sep 21 10:56:44 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'oops-2017-09-14-20:10:08-823-0'
Sep 21 10:56:44 localhost.localdomain abrtd[780]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2017-09-21-10:56:36.402025-811'

Well, I hit this bug. Here are my abrtd service logs. Restarting abrtd seems to make it work again, as mentioned above.
This message is a reminder that Fedora 25 is nearing its end of life. Approximately 4 (four) weeks from now, Fedora will stop maintaining and issuing updates for Fedora 25. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '25'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 25 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
I see the same issue on Fedora 26.
+1 same here, same error message as in comment 16
Got the same problem on Fedora 27.

bt in gdb:

# gdb $(which abrtd) $(pidof abrtd)
(gdb) bt
#0  0x00007f99772d136b in poll () at /lib64/libc.so.6
#1  0x00007f99775f3ed9 in g_main_context_iterate.isra () at /lib64/libglib-2.0.so.0
#2  0x00007f99775f4272 in g_main_loop_run () at /lib64/libglib-2.0.so.0
#3  0x000055c302924fe3 in main (argc=<optimized out>, argv=<optimized out>) at abrtd.c:880

strace (looped):

poll([{fd=3, events=POLLIN|POLLPRI}, {fd=5, events=POLLIN}, {fd=6, events=POLLIN|POLLPRI}, {fd=8, events=POLLIN|POLLPRI}, {fd=15, events=POLLIN}], 5, -1) = 1 ([{fd=15, revents=POLLNVAL}])

journal -u abrtd:

Jan 09 14:16:00 emo.stationary abrt-server[20117]: Deleting problem directory ccpp-2018-01-09-14:15:59.973883-20088 (dup of ccpp-2017-12-20-10:50:24.102205-14210)
Jan 09 14:16:00 emo.stationary abrt-notification[20208]: Process 14210 (clang-4.0) crashed in ??()
Can confirm this on Fedora 27.
Can confirm this on Fedora 27. Strace output almost identical to comment #9, spammed repeatedly. Possibly unrelated, but I noticed the problem after I had closed Kerbal Space Program. I'm running a Lenovo ThinkPad X230 with 16GB RAM and a 128GB SSD. I think I've seen this behavior before, but previously I've just restarted the computer.
I just had a very similar situation happen. However, it happened just after a Firefox crash. It may be that abrtd is simply spending a lot of effort analyzing the large amount of Firefox crash information. Here are the kernel messages from just when abrtd started consuming 100% of a CPU:

Jan 31 20:16:21 idefix audit[8203]: ANOM_ABEND auid=1316 uid=1316 gid=1316 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 pid=8203 comm="Chrome_~dThread" exe=2F7573722F6C696236342F66697265666F782F66697265666F78202864656C6574656429 sig=11 res=1
Jan 31 20:16:21 idefix kernel: show_signal_msg: 7 callbacks suppressed
Jan 31 20:16:21 idefix kernel: Chrome_~dThread[8206]: segfault at 0 ip 00007fb3a106f563 sp 00007fb39d826b00 error 6 in libxul.so (deleted)[7fb3a0b50000+4d0f000]
Jan 31 20:16:21 idefix kernel: Chrome_~dThread[8996]: segfault at 0 ip 00007f536996e663 sp 00007f5366125b00 error 6 in libxul.so[7f536944f000+4d10000]
Jan 31 20:16:21 idefix kernel: Chrome_~dThread[3988]: segfault at 0 ip 00007f290bc6f563 sp 00007f2908426b00 error 6 in libxul.so (deleted)[7f290b750000+4d0f000]
Jan 31 20:16:21 idefix kernel: Chrome_~dThread[2141]: segfault at 0 ip 00007f56dba6f563 sp 00007f56d8226b00 error 6 in libxul.so (deleted)[7f56db550000+4d0f000]
Jan 31 20:16:21 idefix audit[3985]: ANOM_ABEND auid=1316 uid=1316 gid=1316 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 pid=3985 comm="Chrome_~dThread" exe=2F7573722F6C696236342F66697265666F782F66697265666F78202864656C6574656429 sig=11 res=1
Jan 31 20:16:21 idefix audit[2139]: ANOM_ABEND auid=1316 uid=1316 gid=1316 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 pid=2139 comm="Chrome_~dThread" exe=2F7573722F6C696236342F66697265666F782F66697265666F78202864656C6574656429 sig=11 res=1
Jan 31 20:16:21 idefix audit[8993]: ANOM_ABEND auid=1316 uid=1316 gid=1316 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 pid=8993 comm="Chrome_~dThread" exe="/usr/lib64/firefox/firefox" sig=11 res=1
Jan 31 20:16:21 idefix systemd[1]: Created slice system-systemd\x2dcoredump.slice.
Jan 31 20:16:21 idefix systemd[1]: Started Process Core Dump (PID 9025/UID 0).
Jan 31 20:16:21 idefix audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@0-9025-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 31 20:16:21 idefix systemd[1]: Started Process Core Dump (PID 9023/UID 0).
Jan 31 20:16:21 idefix audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@3-9023-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 31 20:16:22 idefix systemd[1]: Started Process Core Dump (PID 9027/UID 0).
Jan 31 20:16:22 idefix audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@2-9027-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 31 20:16:22 idefix systemd[1]: Started Process Core Dump (PID 9024/UID 0).
Jan 31 20:16:22 idefix audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@1-9024-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 31 20:16:23 idefix systemd-coredump[9028]: Process 8993 (Web Content) of user 1316 dumped core.
Stack trace of thread 8996: #0 0x00007f536996e663 _ZN7mozilla3ipc14MessageChannel22OnChannelErrorFromLinkEv (libxul.so) #1 0x00007f536996e75f _ZN7mozilla3ipc11ProcessLink14OnChannelErrorEv (libxul.so) #2 0x00007f5369957726 event_process_active_single_queue.isra.118 (libxul.so) #3 0x00007f5369957d7f event_base_loop (libxul.so) #4 0x00007f536993e80e _ZN4base19MessagePumpLibevent3RunEPNS_11MessagePump8DelegateE (libxul.so) #5 0x00007f53699410c0 _ZN11MessageLoop3RunEv (libxul.so) #6 0x00007f536994d9d9 _ZN4base6Thread10ThreadMainEv (libxul.so) #7 0x00007f536993e35a _ZL10ThreadFuncPv (libxul.so) #8 0x00007f537924261b start_thread (libpthread.so.0) #9 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 8997: #0 0x00007f5379249266 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x00007f53684e2033 pt_TimedWait (libnspr4.so) #2 0x00007f53684e24e6 PR_WaitCondVar (libnspr4.so) #3 0x00007f5369c9d9f7 _ZL12WatchdogMainPv (libxul.so) #4 0x00007f53684e80eb _pt_root (libnspr4.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9013: #0 0x00007f53792491ba pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9e6d _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #2 0x00007f5369559858 _ZN11TimerThread3RunEv (libxul.so) #3 0x00007f53695545ae _ZN8nsThread16ProcessNextEventEbPb (libxul.so) #4 0x00007f536955d668 _Z19NS_ProcessNextEventP9nsIThreadb (libxul.so) #5 0x00007f536996c77a _ZN7mozilla3ipc28MessagePumpForNonMainThreads3RunEPN4base11MessagePump8DelegateE (libxul.so) #6 0x00007f53699410c0 _ZN11MessageLoop3RunEv (libxul.so) #7 0x00007f5369554cfb _ZN8nsThread10ThreadFuncEPv (libxul.so) #8 0x00007f53684e80eb _pt_root (libnspr4.so) #9 0x00007f537924261b start_thread (libpthread.so.0) #10 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9005: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9002: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9003: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 8999: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9012: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x00007f5369543b66 _ZN7mozilla11HangMonitor10ThreadMainEPv (libxul.so) #3 0x00007f53684e80eb _pt_root (libnspr4.so) #4 0x00007f537924261b start_thread (libpthread.so.0) #5 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 8993: #0 0x00007f537846d3db __poll (libc.so.6) #1 0x00007f536b4951f4 _ZL11PollWrapperP8_GPollFDji (libxul.so) #2 0x00007f537424de99 g_main_context_iterate.isra.23 (libglib-2.0.so.0) #3 0x00007f537424dfac g_main_context_iteration (libglib-2.0.so.0) #4 0x00007f536b49526f _ZN10nsAppShell22ProcessNextNativeEventEb (libxul.so) #5 0x00007f536b45ebf2 _ZN14nsBaseAppShell24DoProcessNextNativeEventEb (libxul.so) #6 0x00007f536b45eda6 _ZN14nsBaseAppShell18OnProcessNextEventEP17nsIThreadInternalb (libxul.so) #7 0x00007f5369554485 _ZN8nsThread16ProcessNextEventEbPb (libxul.so) #8 0x00007f536955d668 _Z19NS_ProcessNextEventP9nsIThreadb (libxul.so) #9 0x00007f536996c530 _ZN7mozilla3ipc11MessagePump3RunEPN4base11MessagePump8DelegateE (libxul.so) #10 0x00007f53699410c0 _ZN11MessageLoop3RunEv (libxul.so) #11 0x00007f536b45a298 _ZN14nsBaseAppShell3RunEv (libxul.so) #12 0x00007f536c3ad397 _Z15XRE_RunAppShellv (libxul.so) #13 0x00007f53699410c0 _ZN11MessageLoop3RunEv (libxul.so) #14 0x00007f536c3ad863 _Z20XRE_InitChildProcessiPPcPK12XREChildData (libxul.so) #15 0x0000559f3bdeb5ab _Z20content_process_mainPN7mozilla9BootstrapEiPPc (firefox) #16 0x0000559f3bdeaf00 main (firefox) #17 0x00007f537838300a __libc_start_main (libc.so.6) #18 0x0000559f3bdeb1aa _start (firefox)
Stack trace of thread 9010: #0 0x00007f537846d3db __poll (libc.so.6) #1 0x00007f53684e3ef0 _pr_poll_with_poll (libnspr4.so) #2 0x00007f53695fe33d _ZN7mozilla3net24nsSocketTransportService4PollEPjPNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (libxul.so) #3 0x00007f53696068f8 _ZN7mozilla3net24nsSocketTransportService15DoPollIterationEPNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (libxul.so) #4 0x00007f5369606c9f _ZN7mozilla3net24nsSocketTransportService3RunEv (libxul.so) #5 0x00007f53695545ae _ZN8nsThread16ProcessNextEventEbPb (libxul.so) #6 0x00007f536955d668 _Z19NS_ProcessNextEventP9nsIThreadb (libxul.so) #7 0x00007f536996c77a _ZN7mozilla3ipc28MessagePumpForNonMainThreads3RunEPN4base11MessagePump8DelegateE (libxul.so) #8 0x00007f53699410c0 _ZN11MessageLoop3RunEv (libxul.so) #9 0x00007f5369554cfb _ZN8nsThread10ThreadFuncEPv (libxul.so) #10 0x00007f53684e80eb _pt_root (libnspr4.so) #11 0x00007f537924261b start_thread (libpthread.so.0) #12 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9006: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9000: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9008: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 8998: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9001: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9004: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9007: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Stack trace of thread 9009: #0 0x00007f5379248cbb pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0) #1 0x0000559f3bdf9c37 _ZN7mozilla6detail21ConditionVariableImpl4waitERNS0_9MutexImplE (firefox) #2 0x0000559f3bdf9e05 _ZN7mozilla6detail21ConditionVariableImpl8wait_forERNS0_9MutexImplERKNS_16BaseTimeDurationINS_27TimeDurationValueCalculatorEEE (firefox) #3 0x00007f536c8baae8 _ZN2js12HelperThread10threadLoopEv (libxul.so) #4 0x00007f536c8afd5a _ZN2js6detail16ThreadTrampolineIRFvPvEJPNS_12HelperThreadEEE5StartES2_ (libxul.so) #5 0x00007f537924261b start_thread (libpthread.so.0) #6 0x00007f537847998f __clone (libc.so.6)
Jan 31 20:16:23 idefix audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@0-9025-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 31 20:16:23 idefix systemd-coredump[9029]: Process 8203 (Web Content) of user 1316 dumped core.
Stack trace of thread 8206: #0 0x00007fb3a106f563 n/a (/usr/lib64/firefox/libxul.so (deleted)) #1 0x0000000000000000 n/a (n/a)
Jan 31 20:16:24 idefix audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@3-9023-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 31 20:16:24 idefix systemd-coredump[9030]: Process 3985 (file:// Content) of user 1316 dumped core.
Stack trace of thread 3988: #0 0x00007f290bc6f563 n/a (/usr/lib64/firefox/libxul.so (deleted)) #1 0x0000000000000000 n/a (n/a)
Jan 31 20:16:25 idefix audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@2-9027-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 31 20:16:25 idefix abrt-server[9038]: Lock file '.lock' is locked by process 1554
Jan 31 20:16:26 idefix abrt-notification[9098]: Process 8993 (firefox) crashed in mozilla::ipc::MessageChannel::OnChannelErrorFromLink()()
Jan 31 20:16:27 idefix systemd-coredump[9031]: Process 2139 (Web Content) of user 1316 dumped core.
Stack trace of thread 2141: #0 0x00007f56dba6f563 n/a (/usr/lib64/firefox/libxul.so (deleted)) #1 0x0000000000000000 n/a (n/a) Jan 31 20:16:27 idefix abrt-notification[9259]: Process 8203 (firefox) crashed in ??() Jan 31 20:16:27 idefix audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@1-9024-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 31 20:16:28 idefix kernel: CPU6: Core temperature above threshold, cpu clock throttled (total events = 11) Jan 31 20:16:28 idefix kernel: CPU1: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU2: Core temperature above threshold, cpu clock throttled (total events = 11) Jan 31 20:16:28 idefix kernel: CPU4: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU0: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU7: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU3: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU5: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU2: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU6: Package temperature above threshold, cpu clock throttled (total events = 31) Jan 31 20:16:28 idefix kernel: CPU2: Core temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU4: Package temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU1: Package temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU6: Core temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU0: Package temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU6: Package 
temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU2: Package temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU5: Package temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU7: Package temperature/speed normal Jan 31 20:16:28 idefix kernel: CPU3: Package temperature/speed normal Jan 31 20:16:28 idefix abrtd[963]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2018-01-31-20:16:24.263087-8203' Jan 31 20:16:29 idefix abrtd[963]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2018-01-31-20:16:25.461418-3985' Jan 31 20:16:29 idefix abrtd[963]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2018-01-31-20:16:23.513165-8993' Jan 31 20:16:29 idefix abrtd[963]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'ccpp-2018-01-31-14:57:42.436881-1414' Jan 31 20:16:29 idefix abrtd[963]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'oops-2018-01-31-09:22:45-967-0' Jan 31 20:16:29 idefix abrtd[963]: Size of '/var/spool/abrt' >= 5000 MB (MaxCrashReportsSize), deleting old directory 'Python3-2018-01-31-14:57:42-1498' Jan 31 20:16:29 idefix abrt-server[9160]: '/var/spool/abrt/ccpp-2018-01-31-20:16:25.461418-3985' does not exist Jan 31 20:16:29 idefix abrt-server[9160]: 'post-create' on '/var/spool/abrt/ccpp-2018-01-31-20:16:25.461418-3985' exited with 1 Jan 31 20:16:29 idefix abrt-server[9160]: Deleting problem directory '/var/spool/abrt/ccpp-2018-01-31-20:16:25.461418-3985' Jan 31 20:16:29 idefix abrt-server[9160]: '/var/spool/abrt/ccpp-2018-01-31-20:16:25.461418-3985' does not exist Jan 31 20:17:50 idefix kernel: acpi INT3400:00: Unsupported event [0x86] Jan 31 20:21:28 idefix kernel: CPU0: Core temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU4: Core temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU4: Package temperature/speed normal Jan 31 
20:21:28 idefix kernel: CPU0: Package temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU6: Package temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU2: Package temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU1: Package temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU5: Package temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU3: Package temperature/speed normal Jan 31 20:21:28 idefix kernel: CPU7: Package temperature/speed normal Jan 31 20:23:04 idefix org.xfce.FileManager[1254]: Failed to connect to session manager: Failed to connect to the session manager: SESSION_MANAGER environment variable not defined Jan 31 20:23:56 idefix kernel: perf: interrupt took too long (2506 > 2500), lowering kernel.perf_event_max_sample_rate to 79000 Jan 31 20:24:27 idefix systemd[1]: Starting dnf makecache... Jan 31 20:24:27 idefix dnf[9856]: Metadata cache refreshed recently. Jan 31 20:24:27 idefix systemd[1]: Started dnf makecache. It's been about 15 minutes, and abrtd is still running hard so I'm going to kill it.
This message is a reminder that Fedora 26 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 26. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '26'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 26 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
This happened to me several times on Fedora 27 too.
This issue is still present in Fedora 28:

$ lsb_release -d
Description:    Fedora release 28 (Twenty Eight)

$ sudo strace -p 1044
poll([{fd=4, events=POLLIN|POLLPRI}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN|POLLPRI}, {fd=10, events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}], 5, -1) = 1 ([{fd=16, revents=POLLNVAL}])
[ ... repeated ]

$ sudo du -sh /var/spool/abrt/
558M    /var/spool/abrt/

$ sudo gdb $(which abrtd) $(pidof abrtd)
(gdb) bt
#0  0x00007fc86c61a929 in ?? () from /lib64/libc.so.6
#1  0x000055940d51bd8c in ?? ()
#2  0x000055940d54d790 in ?? ()
#3  0x000055940d52e890 in ?? ()
#4  0x0000000000000005 in ?? ()
#5  0x000055940d54d790 in ?? ()
#6  0x00007fc86c936b06 in g_get_worker_context () at gmain.c:5786
#7  0x0000000000000000 in ?? ()

$ journalctl -b -u abrtd.service
# Shows some "deleting directory" messages, very verbose...
Still present in Fedora 28, yes
It happens in Fedora 28 as well. strace shows this repeatedly:

poll([{fd=4, events=POLLIN|POLLPRI}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN|POLLPRI}, {fd=10, events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}], 5, -1) = 1 ([{fd=16, revents=POLLNVAL}])
I also experienced the same problem on Fedora 28 (4.17.11-200.fc28.x86_64). strace shows the following repeatedly:

poll([{fd=3, events=POLLIN|POLLPRI}, {fd=5, events=POLLIN}, {fd=6, events=POLLIN|POLLPRI}, {fd=9, events=POLLIN|POLLPRI}, {fd=14, events=POLLIN}], 5, -1) = 1 ([{fd=14, revents=POLLNVAL}])
Same issue in F28. Will add strace output.
Created attachment 1479341 [details] strace output (Fedora 28)
I am able to reproduce this on F28 with the following:

1) Set MaxCrashReportsSize=100 in /etc/abrt/abrt.conf

2) Add the following to /etc/libreport/events.d/abrt_event.conf. This creates an additional file (15 MB) in the dump directory, which helps to fill the /var/spool/abrt directory:

EVENT=post-create
        dd if=/dev/urandom of=huge count=15 bs=1048576

3) Unleash crash typhoon.

Result: abrtd process consuming 100% of CPU. strace output:

poll([{fd=3, events=POLLIN|POLLPRI}, {fd=5, events=POLLIN}, {fd=6, events=POLLIN|POLLPRI}, {fd=8, events=POLLIN|POLLPRI}, {fd=13, events=POLLIN}], 5, -1) = 1 ([{fd=13, revents=POLLNVAL}])
Upstream PR: https://github.com/abrt/abrt/pull/1321
I seem to be hitting this bug with abrtd 2.14.2 on Fedora 32. abrt-handle-eve, abrt, and abrtd-dbus are all using a lot of CPU on my laptop.
Please open a new bug about this. It is unlikely to have the same cause, as the problem behind this issue has been addressed. A strace can tell you whether the issue you are seeing is also caused by a problematic double close().