Bug 1301810 - systemd performance degradation with thousands of units (systemctl times out; pid1 high CPU usage when should be idle)
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: systemd
Priority: medium  Severity: medium
Target Milestone: rc
Assigned To: systemd-maint
Depends On:
Blocks: 1203710 1298243 1398314 1420851 1451294
Reported: 2016-01-25 21:47 EST by Ryan Sawhill
Modified: 2018-02-21 10:48 EST
CC List: 1 user

Type: Bug

Attachments: None
Description Ryan Sawhill 2016-01-25 21:47:29 EST
Description of problem:

  One of our customers had a misconfigured instance unit: the ExecStart declaration lacked the "-" prefix. All was fine until the unit started failing. At one point, systemd was keeping track of 14,000 failed instances of the unit.

  In this situation, systemd was extremely unresponsive: it was constantly pegging the CPU, and the vast majority of the time all systemctl commands were erroring out with "Connection timed out".

  Eventually the issue was discovered and cleared up by getting a `systemctl reset-failed` command to succeed.
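  Since individual attempts fail with "Connection timed out" while PID1 is overloaded, "getting a reset-failed to succeed" amounts to a retry loop. A minimal sketch (retry_reset_failed is a made-up helper name):

```shell
# Sketch of the workaround: keep retrying `systemctl reset-failed` until one
# attempt gets through.  While PID 1 is pegged, individual attempts are
# expected to time out, hence the loop.
retry_reset_failed() {
    until systemctl reset-failed; do
        sleep 5   # back off a little between attempts
    done
}

# On the affected machine you would simply run:
#   retry_reset_failed
```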

  Obviously instance units should use "-/usr/bin/somecommand" for their Exec.* directives; however, this brings to light a concern with systemd.
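  For reference, the "-" goes immediately before the executable path; it tells systemd to record a non-zero exit status but treat it as equivalent to success, so the instance never enters the failed state. A minimal sketch of such a template unit (somecommand@.service is a made-up name):

```ini
# /etc/systemd/system/somecommand@.service -- hypothetical example
[Unit]
Description=Example instance unit for %i

[Service]
# The leading "-" means a non-zero exit status is logged but the unit is
# NOT marked failed, so instances cannot pile up in the failed state.
ExecStart=-/usr/bin/somecommand %i
```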

  Is there a heretofore hidden bug in the way systemd handles units ... something which can be optimized or fixed? Or are there simply limits to the number of units systemd can manage? If the latter, can engineering please provide some guidance on this?

  (NOTE: I'm not sure if this issue is specific to "failed" units; I suspect it would apply equally to large numbers of non-failed units, though I haven't tested this via a generator or something else. Also haven't looked at the code.)

Version-Release number of selected component (if applicable):

  Initially discovered on systemd-208-20.el7_1.5.x86_64
  Experienced on latest (systemd-219-19.el7.x86_64)

How reproducible:

  I've had a hard time reproducing this reliably. That is to say: I can't nail it down to something specific like "with 14,398 failed units, systemd starts failing" or even "on an otherwise idle 1 CPU system w/ 512 MiB RAM, this issue always shows up around 10,000 failed units".

  That said, on an otherwise idle 1 CPU system w/ 512 MiB RAM, running a RHEL 7.2 base (non-gui) server install with the latest bits from the CDN (as of right now), I start to see high CPU usage immediately after adding a few thousand failed units and systemctl commands start slowing down as well. I usually see my first systemctl timeouts somewhere in the 40k-60k range.
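  To put a number on "high CPU usage", one option is to sample PID1's cumulative CPU time straight from /proc (a sketch; pid1_cpu_ticks is a made-up helper, and the values are in kernel clock ticks, typically 100 per second):

```shell
# Sum PID 1's utime + stime (fields 14 and 15 of /proc/1/stat, counted in
# kernel clock ticks).  Sample it twice a few seconds apart: a big delta on
# an otherwise idle box means systemd itself is burning the CPU.
pid1_cpu_ticks() {
    # Strip everything up to the ") " that closes the comm field, then utime
    # and stime are fields 12 and 13 of what remains.
    awk '{ sub(/.*\) /, ""); print $12 + $13 }' "${1:-/proc/1/stat}"
}

# Example: two samples 5 seconds apart
#   t1=$(pid1_cpu_ticks); sleep 5; t2=$(pid1_cpu_ticks); echo $((t2 - t1))
```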

Steps to Reproduce:

  Generally speaking:
  1. Generate tons of [failed?] units
  2. Notice that PID1 pegs CPU
  3. Notice that systemctl commands take a long time or time out
  4. Use a while/until loop to execute `systemctl reset-failed` and all is well again

  Specifically, with the reproducer script:
  1. curl -O http://people.redhat.com/rsawhill/sysd-failtester.sh
  2. bash sysd-failtester.sh   # Runs initial setup
  3. bash sysd-failtester.sh   # Loop-creates failed instances
  4. Wait (loop breaks when first systemctl command fails)
  5. Notice that even after all sockets are closed PID1 still pegs CPU

  See the reproducer script in action:
  https://paste.fedoraproject.org/314725/
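  The script above isn't reproduced here, but a generic way to mass-create failed units is transient units whose command exits non-zero (a sketch, not sysd-failtester.sh itself; make_failed_units is a made-up helper, and each failed transient unit stays tracked by PID1 until a reset-failed):

```shell
# Sketch: create N transient units that each run /bin/false, so every one
# ends up in the "failed" state and stays loaded in PID 1's memory until
# `systemctl reset-failed` is issued.
make_failed_units() {
    local n=$1 i
    for i in $(seq 1 "$n"); do
        systemd-run --unit="failtest-$i" /bin/false >/dev/null 2>&1
    done
}

# On a throwaway VM you might run:
#   make_failed_units 10000
#   systemctl list-units --state=failed --no-legend | wc -l   # watch the count climb
#   time systemctl status >/dev/null                          # watch this get slower
```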

Comment 1 Ryan Sawhill 2016-01-25 22:09:00 EST
Looks like I have a correction to make.

  > Experienced on latest (systemd-219-19.el7.x86_64)

Regarding this comment from my reproducer steps:

  > Notice that even after all sockets are closed PID1 still pegs CPU

I've noticed that this particular point is no longer the case with systemd-219-19.el7 in RHEL 7.2. My reproducer script led me to believe it was, because PID1 spends a lot of CPU time handling all the instance units & their sockets (the mechanism I used to generate tons of units). A little while after running the reproducer script, systemd CPU usage settles back down to nothing.

Furthermore, while systemctl commands certainly take considerably longer than normal, they do not fail. Going back and testing RHEL 7.1 now.
Comment 2 Ryan Sawhill 2016-01-25 22:11:13 EST
> Furthermore, while systemctl commands certainly take considerably longer than normal, they do not fail.

   * They do not fail after systemd CPU usage settles back down to a normal level.
Comment 3 Ryan Sawhill 2016-04-19 14:55:50 EDT
I tested this again today on the latest systemd available (219-19.el7_2.7) and I wasn't able to clearly reproduce it. Of course systemctl still starts slowing down when there are thousands and thousands of units, but it's not nearly as dramatic as it was with pre-RHEL 7.2 systemd.

For the record: I ran into other problems eventually (where systemd-logind and tons of other things on the system started complaining "Argument list too long") but that was after such a crazy-high number of failed units that I don't think we need to look into it.

That said, it sure would be nice if the systemd project could put forth some official guidance for this kind of stuff. Or perhaps configure systemd to automatically trigger reset-failed when things get past a certain limit.
Comment 4 Ryan Sawhill 2016-04-19 15:03:18 EDT
PS: The "Argument list too long" stuff starts happening after 65,500 connections are made to the sysd-failtester.socket in that reproducer script posted earlier (i.e., after 65k failed units were present).
