Bug 1319704 - abrt-dbus memory leak
Summary: abrt-dbus memory leak
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: abrt
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jakub Filak
QA Contact: Martin Kyral
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-21 11:11 UTC by evgeniypatlan
Modified: 2016-11-04 03:09 UTC
CC: 8 users

Fixed In Version: abrt-2.1.11-39.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-04 03:09:11 UTC
Target Upstream Version:
Embargoed:


Attachments
abrt_dbus memory leak (55.34 KB, image/jpeg)
2016-03-21 11:11 UTC, evgeniypatlan
no flags
Patch 1/1: Fix memory leaks in abrt-dbus (6.09 KB, patch)
2016-05-04 07:27 UTC, Matej Habrnal
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2307 0 normal SHIPPED_LIVE abrt, gnome-abrt, and libreport bug fix and enhancement update 2016-11-03 13:40:24 UTC

Description evgeniypatlan 2016-03-21 11:11:24 UTC
Created attachment 1138535 [details]
abrt_dbus memory leak

Hi.

Description of problem:
It was detected that abrt-dbus-2.1.11-35.0.1.0.1.el7 has a memory leak: it can sometimes use more than 3 GB of RAM.
I added a short script that records the memory usage of abrt-dbus every hour; the attached graph shows how the usage grows over time.
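The reporter's collection script itself is not attached; a minimal sketch of such an hourly sampler (run from cron) might look like the following. The log path and the use of `ps -C` are assumptions, not part of the original report.

```shell
#!/bin/sh
# Append one timestamped RSS sample (in KiB) for abrt-dbus to a log file.
# Intended to be run hourly from cron, e.g.:
#   0 * * * * /usr/local/bin/abrt-dbus-memlog.sh
LOG=${LOG:-/tmp/abrt-dbus-mem.log}
# ps -C selects by command name and prints nothing if the daemon is not running.
rss=$(ps -C abrt-dbus -o rss= | head -n 1)
printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "${rss:-0}" >> "$LOG"
```

The resulting two-column log (UTC timestamp, RSS in KiB) can then be plotted with gnuplot or a spreadsheet to produce a growth graph like the attached one.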

Version-Release number of selected component (if applicable):
Installed Packages
Name        : abrt-dbus
Arch        : x86_64
Version     : 2.1.11
Release     : 35.0.1.0.1.el7
Size        : 129 k
Repo        : installed
Summary     : ABRT DBus service
URL         : https://fedorahosted.org/abrt/
License     : GPLv2+
Description : ABRT DBus service which provides org.freedesktop.problems API on dbus and
            : uses PolicyKit to authorize to access the problem data.



How reproducible:
I was not able to find steps to reproduce it. On an ordinary system without any core dumps, the memory usage increases every hour.

Using valgrind, I found the following:
============
==7586== LEAK SUMMARY:
==7586== definitely lost: 15 bytes in 1 blocks
==7586== indirectly lost: 0 bytes in 0 blocks
==7586== possibly lost: 7,572 bytes in 136 blocks
==7586== still reachable: 100,671 bytes in 1,139 blocks
==7586== suppressed: 0 bytes in 0 blocks
==7586== Reachable blocks (those to which a pointer was found) are not shown.
==7586== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==7586== 
==7586== ERROR SUMMARY: 132 errors from 132 contexts (suppressed: 1 from 1)
============
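The summary above was produced with valgrind's default options; the fuller report it mentions can be captured roughly as sketched below. The binary path, log location, and the short `-t10` idle timeout are assumptions here, chosen so the process exits and valgrind can print its report.

```shell
#!/bin/sh
# Sketch: rerun abrt-dbus under valgrind with full leak details, as the
# summary above suggests. Guarded so it degrades cleanly on systems
# without valgrind or abrt installed.
BINARY=${BINARY:-/usr/sbin/abrt-dbus}
LOGFILE=${LOGFILE:-/tmp/abrt-dbus-valgrind.log}
if command -v valgrind >/dev/null 2>&1 && [ -x "$BINARY" ]; then
    # -t10 makes the service exit after 10 idle seconds, so the leak
    # report is written at process exit.
    valgrind --leak-check=full --show-leak-kinds=all \
             --log-file="$LOGFILE" "$BINARY" -t10
else
    echo "skipping: valgrind or $BINARY not available" >&2
fi
```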

Also from gdb I saw:
=====
#0 0x00007fd91e9ca00d in poll () from /lib64/libc.so.6
#1 0x00007fd91eeeac94 in g_main_context_iterate.isra.24 () from /lib64/libglib-2.0.so.0
#2 0x00007fd91eeeafda in g_main_loop_run () from /lib64/libglib-2.0.so.0
#3 0x00007fd920436d6d in main (argc=2, argv=<optimized out>) at abrt-dbus.c:1054
=====

Running strace on the abrt-dbus process, I saw a lot of calls like:
================
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] futex(0x7f293ee32860, FUTEX_WAKE_PRIVATE, 1) = 1
[pid 16709] <... futex resumed> )       = 0
[pid 16729] poll([{fd=4, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 16709] futex(0x7f293ee32860, FUTEX_WAKE_PRIVATE, 1) = 0
[pid 16729] write(5, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] poll([{fd=5, events=POLLIN}, {fd=4, events=POLLIN}], 2, 4294967295) = 1 ([{fd=5, revents=POLLIN}])
[pid 16729] poll([{fd=5, events=POLLIN}, {fd=4, events=POLLIN}], 2, 4294967295) = 1 ([{fd=5, revents=POLLIN}])
[pid 16729] read(5, "\1\0\0\0\0\0\0\0", 16) = 8
[pid 16729] poll([{fd=5, events=POLLIN}, {fd=4, events=POLLIN}], 2, 4294967295 <unfinished ...>
[pid 16709] poll([{fd=3, events=POLLIN}], 1, 0) = 1 ([{fd=3, revents=POLLIN}])
[pid 16709] read(3, "\0367\0\0\0\0\0\0", 16) = 8
[pid 16709] poll([{fd=3, events=POLLIN}], 1, 122393^C <unfinished ...>
[pid 16729] <... poll resumed> )        = 1 ([{fd=4, revents=POLLIN}])
[pid 16729] read(5, 0x7f2935e13c40, 16) = -1 EAGAIN (Resource temporarily unavailable)
[pid 16729] write(5, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] recvmsg(4, {msg_name(0)=NULL, msg_iov(1)=[{"l\4\1\1'\0\0\0\327\26\n\0\211\0\0\0", 16}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 16
[pid 16729] poll([{fd=4, events=POLLIN}], 1, 0) = 1 ([{fd=4, revents=POLLIN}])
[pid 16729] recvmsg(4, {msg_name(0)=NULL, msg_iov(1)=[{"\1\1o\0\25\0\0\0/org/freedesktop/DBus\0\0\0\2\1s\0\24\0\0\0org.freedesktop.DBus\0\0\0\0\3\1s\0\20\0\0\0NameOwnerChanged\0\0\0\0\0\0\0\0\7\1s"..., 183}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 183
[pid 16729] write(5, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(5, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] poll([{fd=5, events=POLLIN}], 1, 0) = 1 ([{fd=5, revents=POLLIN}])
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16709] <... poll resumed> )        = 1 ([{fd=3, revents=POLLIN}])
[pid 16729] futex(0x7f293ee36d70, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
[pid 16709] futex(0x7f293ee36d70, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
[pid 16729] <... futex resumed> )       = 0
[pid 16709] <... futex resumed> )       = -1 EAGAIN (Resource temporarily unavailable)
[pid 16729] futex(0x7f293ee36d70, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
[pid 16709] read(3, "\1\0\0\0\0\0\0\0", 16) = 8
[pid 16709] futex(0x7f293ee36d70, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
[pid 16729] <... futex resumed> )       = 0
[pid 16709] <... futex resumed> )       = 1
[pid 16729] futex(0x7f293ee36d70, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
[pid 16709] futex(0x7f293ee32860, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
[pid 16729] <... futex resumed> )       = 0
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
[pid 16729] write(3, "\1\0\0\0\0\0\0\0", 8) = 8
================

Please investigate and provide a fix.
Thanks in advance.
--
Sincerely,
Evgeniy Patlan

Comment 2 evgeniypatlan 2016-04-06 11:17:54 UTC
Hi.
Any news?
--
Sincerely,
Evgeniy Patlan

Comment 3 Sergey Onanchenko 2016-04-29 05:16:45 UTC
Hello All,

A lot of servers are affected by the described issue.

We really appreciate any help you can provide.

Comment 4 Jakub Filak 2016-04-29 06:16:33 UTC
abrt-dbus is not meant to be a long-running service, so the leak should not cause any harm. The service is started on demand by the D-Bus daemon when something wants to use the "org.freedesktop.problems" address, and it should exit 133 seconds after the last client request has been completed.

For example, if you start a new shell (bash, a virtual terminal), the profile.d script /etc/profile.d/abrt-console-notification.sh checks for new crashes via abrt-dbus; if no further requests reach abrt-dbus, it exits after 133 seconds:

[Fri Apr 29 08:05:08 jfilak@rhel7 ~]
$ ps aux | grep abrt-dbus
root      5252  0.0  0.1 332760  5256 ?        Sl   08:05   0:00 /usr/sbin/abrt-dbus -t133
jfilak    5289  0.0  0.0 112644   956 pts/4    S+   08:05   0:00 grep --color=auto abrt-dbus
[Fri Apr 29 08:05:19 jfilak@rhel7 ~]
$ sleep 135; ps aux | grep abrt-dbus
jfilak    5323  0.0  0.0 112644   960 pts/4    S+   08:07   0:00 grep --color=auto abrt-dbus
[Fri Apr 29 08:07:50 jfilak@rhel7 ~]
$

There must be something that keeps abrt-dbus alive.

Anyway, I will try to find and fix the leak if possible.
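The 133-second behaviour described above is an inactivity timeout: the daemon keeps serving requests and exits once none arrive for the configured interval. A toy shell sketch of that idea follows; it is purely illustrative (the real abrt-dbus implements this inside a GLib main loop), with lines on stdin standing in for D-Bus requests. It requires bash for `read -t`.

```shell
#!/bin/bash
# Toy model of an inactivity timeout like abrt-dbus's "-t133": treat each
# input line as a "request" and exit once no request arrives within
# TIMEOUT seconds (or input ends). Illustrative only.
TIMEOUT=${TIMEOUT:-1}
served=0
# Simulate two requests followed by silence.
printf 'request-1\nrequest-2\n' > /tmp/simulated-requests.txt
# read -r -t returns non-zero on EOF or after TIMEOUT seconds of silence,
# which ends the loop and lets the "daemon" exit.
while read -r -t "$TIMEOUT" _request; do
    served=$((served + 1))   # "handle" one request
done < /tmp/simulated-requests.txt
echo "no request for ${TIMEOUT}s; served $served requests, exiting"
```

This mirrors why a leak in abrt-dbus is normally bounded: the process is short-lived by design, so any leaked memory is reclaimed at exit unless something keeps sending requests and prevents the timeout from firing.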

Comment 5 Jakub Filak 2016-05-02 09:00:23 UTC
Upstream pull request:
https://github.com/abrt/abrt/pull/1140

Comment 7 Matej Habrnal 2016-05-04 07:27:51 UTC
Created attachment 1153724 [details]
Patch 1/1: Fix memory leaks in abrt-dbus

Comment 10 errata-xmlrpc 2016-11-04 03:09:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2307.html

