Bug 1308780 - systemd Using 4GB RAM after 18 Days of Uptime
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: systemd
Version: 7.2
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: systemd-maint
QA Contact: qe-baseos-daemons
Docs Contact:
Depends On:
Blocks:
Reported: 2016-02-15 22:42 EST by meridionaljet
Modified: 2018-05-30 20:58 EDT
CC: 13 users

See Also:
Fixed In Version: systemd-219-19.el7_2.20
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-01-25 08:38:06 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Output of "sudo journalctl" and "systemctl -t service" (10.12 KB, text/plain), 2016-02-15 22:42 EST, meridionaljet
Output of "systemd-analyze dump" (622.74 KB, text/plain), 2016-02-16 12:29 EST, meridionaljet
Output of atop during server high load (11.21 KB, text/plain), 2016-02-16 12:47 EST, meridionaljet
systemd-analyze_dump (1.34 MB, text/plain), 2016-11-28 08:09 EST, SHAURYA

Description meridionaljet 2016-02-15 22:42:43 EST
Created attachment 1127487 [details]
Output of "sudo journalctl" and "systemctl -t service"

Description of problem:

On a live web server running CentOS 7.2, the systemd process (PID 1) is leaking roughly 200 MB of memory per day; it is currently using 3.7 GB of RAM after 18 days of uptime. The server must be rebooted periodically to free the memory.
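(A minimal sketch, not from this report, of how a leak rate like this can be quantified: sample PID 1's resident set size from /proc and append a timestamped line to a log. The log path is an assumption.)

```shell
# Hypothetical monitoring snippet: read PID 1's VmRSS from /proc/1/status
# and append a timestamped sample to a log file.
LOG=${LOG:-./pid1-rss.log}                           # assumed log path
rss_kb=$(awk '/^VmRSS:/ {print $2}' /proc/1/status)  # resident set size in kB
echo "$(date -Is) VmRSS=${rss_kb} kB" >> "$LOG"
```

Run from cron (e.g. hourly) and the daily growth rate falls out of the log.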


Version-Release number of selected component (if applicable):

systemd version 219

How reproducible:

Reproducible on this particular server by simply rebooting and watching RAM usage grow over time.

Actual results:

RAM usage of the PID 1 process increases by ~200 MB per day.

Expected results:

RAM usage should not increase.

Additional info:

The only heavy activity in the logs shown by "sudo journalctl" is related to numerous rsync SSH connections made by another production server. I've attached a sample of the journal log with real hostnames and IP addresses redacted. I've also attached the output of "systemctl -t service".
Comment 2 Lukáš Nykrýn 2016-02-16 03:50:02 EST
Sounds like https://github.com/systemd/systemd/issues/1961
Comment 3 meridionaljet 2016-02-16 08:41:52 EST
(In reply to Lukáš Nykrýn from comment #2)
> Sounds like https://github.com/systemd/systemd/issues/1961

Well, not quite. The CPU is not pegged at 100% here, and "systemctl list-unit-files" lists only ~60 session-*.scope* units. I also see no logind failures in "sudo journalctl -b -u systemd-logind" as in that issue.

There are 86 scope files and associated directories in /run/systemd/system/ on this server, amounting to ~20 MB of disk space. Many of these files are up to 6 days old. Is this normal? The server has been up for 19 days, so if this were the source of the leak I would have expected to see orphaned files as old as 19 days as well.
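(A quick way to check for stale scope fragments of the kind described above; a sketch, where the one-day staleness threshold is an assumption.)

```shell
# Count session scope fragments under the runtime unit directory,
# and how many of them are older than one day.
DIR=${DIR:-/run/systemd/system}   # directory named in the comment above
total=$(find "$DIR" -maxdepth 1 -name 'session-*.scope' 2>/dev/null | wc -l)
stale=$(find "$DIR" -maxdepth 1 -name 'session-*.scope' -mtime +1 2>/dev/null | wc -l)
echo "scope fragments: total=$total, older than 1 day=$stale"
```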
Comment 4 meridionaljet 2016-02-16 08:56:45 EST
Here is some output from systemd-cgtop showing resource usage of each active control group. Note that the problem is only showing up in the "root" path.

>Path                                                    Tasks   %CPU   Memory  Input/s Output/s
>/                                                         296   30.5    11.3G   657.8K   893.0K
>system.slice/NetworkManager.service                         1      -        -        -        -
>system.slice/auditd.service                                 1      -        -        -        -
>system.slice/crond.service                                  1      -        -        -        -
>system.slice/dbus.service                                   1      -        -        -        -
>system.slice/irqbalance.service                             1      -        -        -        -
>system.slice/lvm2-lvmetad.service                           1      -        -        -        -
>system.slice/mariadb.service                                2      -        -        -        -
>system.slice/nginx.service                                 10      -        -        -        -
>system.slice/php-fpm.service                              101      -        -        -        -
>system.slice/polkit.service                                 1      -        -        -        -
>system.slice/postfix.service                                3      -        -        -        -
>system.slice/rsyslog.service                                1      -        -        -        -
>system.slice/smartd.service                                 1      -        -        -        -
>system.slice/sshd.service                                   2      -        -        -        -
>system.slice/system-getty.slice/getty@tty1.service          1      -        -        -        -
>system.slice/systemd-journald.service                       1      -        -        -        -
>system.slice/systemd-logind.service                         1      -        -        -        -
>system.slice/systemd-udevd.service                          1      -        -        -        -
>system.slice/tuned.service                                  1      -        -        -        -
>system.slice/wpa_supplicant.service                         1      -        -        -        -
>user.slice/user-1000.slice/session-7170741.scope            4      -        -        -        -
Comment 5 Lukáš Nykrýn 2016-02-16 11:31:37 EST
If you run "systemctl daemon-reexec", does it decrease the amount of allocated memory?
Can you also attach output of systemd-analyze dump?
Comment 6 meridionaljet 2016-02-16 12:29 EST
Created attachment 1127642 [details]
Output of "systemd-analyze dump"
Comment 7 meridionaljet 2016-02-16 12:31:10 EST
(In reply to Lukáš Nykrýn from comment #5)
> If you run systemctl daemon-reexec does it decrease the amount of allocated
> memory?
> Can you also attach output of systemd-analyze dump?

Running systemctl daemon-reexec does release all of the used RAM. The question is whether the leak will continue; it has persisted through reboots before. Does the result of this command provide any insight into the cause of the leak?

I've attached the output of "systemd-analyze dump" taken prior to issuing the daemon-reexec command.
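(For reference, the before/after measurement around a re-exec can be scripted roughly like this; a sketch that skips the re-exec when not running as root.)

```shell
# Sample PID 1's RSS, re-exec the manager (root only), then sample again.
sample_rss() { awk '/^VmRSS:/ {print $2}' /proc/1/status; }

before=$(sample_rss)
if [ "$(id -u)" -eq 0 ] && command -v systemctl >/dev/null 2>&1; then
    systemctl daemon-reexec 2>/dev/null || true  # releases leaked memory here
    sleep 2                                      # give PID 1 time to settle
fi
after=$(sample_rss)
echo "PID 1 RSS: before=${before} kB, after=${after} kB"
```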
Comment 8 meridionaljet 2016-02-16 12:47 EST
Created attachment 1127656 [details]
Output of atop during server high load

Here's another example of abnormal behavior of systemd.

I've attached the output of atop during a period when the server was under high load from production tasks. These tasks download, read, and write large amounts of data on the /home partition. However, a huge fraction of the disk I/O is taking place on the root partition (LVM centos-root on the left-hand side of the output), which should not be the case, and atop shows systemd as responsible for the majority of that disk usage. This is another example of what seems like abnormal behavior.
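(One way to attribute writes like these, a sketch rather than something run in this report, is the per-process I/O accounting in /proc, which records cumulative bytes each process has asked the block layer to write.)

```shell
# Read the cumulative write_bytes counter from /proc/<pid>/io.
# Reading PID 1's counters requires root; any PID can be substituted.
PID=${PID:-1}
wrote=$(awk -F': ' '/^write_bytes/ {print $2}' "/proc/$PID/io" 2>/dev/null)
echo "PID $PID write_bytes=${wrote:-unreadable}"
```

Sampling this periodically would show whether PID 1 itself, or something else in the root cgroup, is doing the writing.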
Comment 10 Lukáš Nykrýn 2016-02-17 06:14:04 EST
Would you be willing to try a test build? We found one memory leak.

https://people.redhat.com/lnykryn/systemd/bz1308780/
Comment 11 meridionaljet 2016-02-17 10:10:36 EST
(In reply to Lukáš Nykrýn from comment #10)
> Would you be willing to try a test build? We found one memory-leak.
> 
> https://people.redhat.com/lnykryn/systemd/bz1308780/

Well, this is a live web server, so I'm a little wary. Is the leak you found capable of leaking 200 MB/day, as I have observed?
Comment 12 Lukáš Nykrýn 2016-02-17 10:26:09 EST
I am sorry, but I don't know. I will try to find an artificial reproducer and try the fix myself.
Comment 13 info 2016-09-21 18:53:18 EDT
I am observing this memory leak on my Ubuntu Xenial server. I'm willing to give you whatever information you want and try whatever you have to fix it.
Comment 14 Lukáš Nykrýn 2016-09-22 03:10:34 EDT
This problem should be fixed in systemd-219-30. If anyone is willing to try that, we have a repo with test builds here: https://copr.fedorainfracloud.org/coprs/lnykryn/systemd-rhel-staging/
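(A quick check of whether an installed build already carries the fix; a sketch, where the version comparison relies on GNU sort -V and the rpm query falls back to 0 on systems without rpm or without an rpm-installed systemd.)

```shell
# at_least FIXED HAVE -> success when HAVE sorts at or after FIXED.
at_least() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Query the installed build; fall back to 0 when rpm can't answer.
if rpm -q systemd >/dev/null 2>&1; then
    have=$(rpm -q --qf '%{VERSION}-%{RELEASE}' systemd)
else
    have=0
fi

if at_least "219-30" "$have"; then
    echo "installed systemd $have should include the fix"
else
    echo "installed systemd $have may predate the fix (need >= 219-30)"
fi
```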
Comment 15 Benjamin Lefoul 2016-11-06 13:36:17 EST
(In reply to Lukáš Nykrýn from comment #14)
> This problem should be fixed in systemd-219-30. If anyone is willing to try
> that, we have a repo with test builds here:
> https://copr.fedorainfracloud.org/coprs/lnykryn/systemd-rhel-staging/

Hi Lukáš,

We are experiencing this problem on a production server. Does 219-30 fix it? I see it's available in the RHEL 7.3 beta.
Comment 16 Lukáš Nykrýn 2016-11-07 02:58:29 EST
If I am not mistaken, 7.3 should be out now, so you can try the latest version there.
Comment 17 Benjamin Lefoul 2016-11-07 04:46:04 EST
(In reply to Lukáš Nykrýn from comment #16)
> If I am not mistaken 7.3 should be out now. So you can try the latest
> version there.

Indeed. I will give an update here as soon as it shows up in the CentOS repository.
Comment 18 SHAURYA 2016-11-28 08:09 EST
Created attachment 1225281 [details]
systemd-analyze_dump
