Bug 432903 - /etc/security/limits.conf should reduce the risk of forkbombing
Summary: /etc/security/limits.conf should reduce the risk of forkbombing
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: pam
Version: rawhide
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Tomas Mraz
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-02-15 01:00 UTC by Chris Snook
Modified: 2019-10-17 02:42 UTC
CC: 20 users

Fixed In Version: pam-0.99.10.0-1.fc9
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-02-15 18:23:21 UTC
Type: ---
Embargoed:



Description Chris Snook 2008-02-15 01:00:14 UTC
Description of problem:
The formula used by the kernel to calculate the default value of RLIMIT_NPROC
(equivalent to ulimit -u) is only intended to ensure that the kernel will be
able to boot completely on systems with very small amounts of memory.  It is not
intended to protect the system against a forkbomb.  The kernel community has for
many years refused to implement many suggested protections against forkbomb
attacks, since it would be difficult to do so while preserving the
policy/mechanism separation.  The kernel already provides a mechanism to prevent
forkbomb attacks (setrlimit), and it is the responsibility of userspace to take
advantage of it.
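
For illustration, these kernel-derived defaults can be inspected from a shell; the
numbers below are only examples (roughly what a 2 GB box shows), and by default the
soft and hard limits are typically the same:

$ ulimit -Hu    # hard limit on the number of user processes
16000
$ ulimit -Su    # soft limit
16000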

Currently, /etc/security/limits.conf does not set any stricter limits than the
very loose limits permitted by the kernel.  It has often been argued that
setting a default limit across the distribution would be very difficult to get
right for both large and small systems.  This is certainly true for hard limits,
but soft limits are much more flexible.

Version-Release number of selected component (if applicable):
pam-0.99.8.1-18.fc9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. start with a Core 2 Duo, 2 GB RAM
2. install F9 alpha, with 2GB swap
3. log in as unprivileged user (root has no process limit)
4. run 'ulimit -u', observe that the process limit is approximately 16000 for a
2GB machine
5. unpack a large source tarball, such as the kernel
6. run 'make -j'

Actual results:
System load climbs over 300 and %sys time rises over 90% before the system
becomes unresponsive to anything except pings and sysrq.  This is not a kernel
bug, because the system is still making glacial progress, so it would be
inappropriate to invoke the OOM-killer.  The system does not recover unless the
job is killed with sysrq-i or sysrq-k.

Expected results:
Unprivileged user doesn't bring down the box with a few keystrokes.

Additional info:
Setting a soft limit of 1024 threads with 'ulimit -S -u 1024' allowed the system
to recover in a few minutes.  In some cases the 'make -j' jobs even completed
successfully.

Setting a soft limit of 1024 would not override a lower kernel-calculated hard
limit on systems with very low memory, so it doesn't introduce any new problems
there.  Applications that expect to launch thousands of threads already require
that /etc/security/limits.conf be configured as part of their installation
procedure anyway.

The obvious argument against this is that it doesn't prevent a malicious user
from raising their soft limit and forkbombing the system.  This is true in the
very limited case of shell access, but forkbombing is the least of our worries in
that scenario.  There are, however, many circumstances in which user errors or
application bugs can cause an inadvertent forkbomb, or a malicious attacker
could induce a forkbomb without shell access, for example by invoking a cgi
script via http.  This is the reason soft limits exist, and we should use them
as they were intended.

Comment 1 Chris Snook 2008-02-15 01:02:41 UTC
The fix for this bug is *one line*, and it's in a configuration file, so
updating an existing system won't break an installed application, unlike changes
to the kernel which would break installed applications with extremely high
thread counts.
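
For illustration only -- the exact value and location are open for discussion -- the
change would be a single limits(5)-style line, something like:

*          soft    nproc     1024

either in /etc/security/limits.conf itself or in a drop-in file under
/etc/security/limits.d/.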

Comment 2 Tomas Mraz 2008-02-15 11:14:33 UTC
I'm OK with adding such a limit to the default PAM configuration.
So do you think the value of 1024 is a good all-round soft limit for Fedora?


Comment 3 Chris Snook 2008-02-15 16:53:02 UTC
For typical desktop use, 256 would probably be fine, but it might annoy
power-users, and cause problems for highly-threaded java servlets that aren't
highly-threaded enough to warrant limits.conf configuration currently.  With 1024, my
high-end desktop crawls for 10 minutes before recovering from fat-fingering
'make -j' instead of 'make -j2', so I think that limit would protect a server
while still allowing most reasonable workloads, and servers are the systems at the
greatest risk.  HPC workloads on boxes with many CPUs generally shouldn't be
impacted, because they try to use as few threads per CPU as possible.

If we're going to pick a magic number, 1024 already has precedent for the open
files limit.  Oracle recommends a setting of 4096 for some of their
highly-threaded applications, so if 1024 annoys a bunch of server users, we can
throw desktop users under the bus and raise it to 4096.  My goal is to have a
default setting that as few people as possible need to tweak.

Comment 4 Tomas Mraz 2008-02-15 17:01:47 UTC
OK, let's start with 1024 in rawhide.


Comment 5 acount closed by user 2009-05-07 21:27:42 UTC
on Fedora 10, _without_ the 90-nproc.conf file, the "open files" limit is already 1024!

$ ulimit -Hn
1024

for users and also for root.


is it still necessary?

-thanks-

Comment 6 Tomas Mraz 2009-05-08 07:28:03 UTC
Seems that it is not necessary anymore, but it is harmless to have it there.

Comment 7 acount closed by user 2009-05-08 19:44:35 UTC
(In reply to comment #6)

> Seems that it is not necessary anymore, but it is harmless to have it there.

Then better to remove it.

In the future this limit may be raised or lowered, and this file (90-nproc.conf) could obstruct upstream changes.


-thanks-


regards,

Comment 8 Tomas Mraz 2009-09-01 15:27:54 UTC
Oops, I overlooked the mistake in comment 5 - this bug is about the soft limit on the number of a user's processes, not about the number of open files. So the setting in 90-nproc.conf should be kept as is.

Comment 9 Ricardo Arguello 2011-08-17 05:59:28 UTC
There should be some documentation about this setting for Apache admins going nuts about their MaxClients setting being ignored!

In fact, some admins believe that adding an nproc setting for the "apache" user in /etc/security/limits.conf will override the 90-nproc.conf setting, but httpd is started as root, and every worker process then setuids to the apache user.
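
A quick way to see the limit a running httpd is actually operating under (a rough
sketch; this assumes pgrep -o picks the parent httpd process, and the numbers shown
are made up):

# grep -i 'max processes' /proc/$(pgrep -o httpd)/limits
Max processes             1024                 30518                processes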

Comment 10 Joseph Shraibman 2012-05-18 21:25:47 UTC
Whose idea was it to put the limit in 90-nproc.conf instead of limits.conf?  I never knew 90-nproc.conf existed, and I was trying to figure out why my limits.conf settings weren't working.
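
For anyone else hunting this down, something like the following shows every place the
limit is configured (the output shown is just an example):

$ grep -r nproc /etc/security/limits.conf /etc/security/limits.d/
/etc/security/limits.d/90-nproc.conf:*          soft    nproc     1024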

Comment 11 Ali Akcaagac 2013-01-16 12:48:44 UTC
I had issues with that myself today.

I run Freemind (a mind-mapping program based on Java). My mindmap has around 3700 nodes. This has worked fine so far (using it together with Freemind and an iPhone mind-mapping app).

The problem I had was this:

- From one second to the next I wasn't able to spawn new processes. Bash
  complained with fork() == -1 or something similar and spat out error
  messages that processes could not be forked

- After some googling I read that it might be a good idea to decrease the
  per-thread stack size for Java. I tried this without luck.

- Later on I figured out that Java itself (and this mind-mapping tool)
  apparently spawns its own process/thread for each node

  Running:

  while [ 1 ] ; do sleep 2 ; ps uH p `pidof java` | wc -l ; done

  This told me that the Java application tried to spawn up to 15000 processes/threads.

- I even tried to set ulimit -u to 4096 (sadly it's not possible to increase
  that one from user perspective).

- I then switched to root and entered ulimit -u 20000 and ran the java
  application as root -> no issues

- Later on I found the 90-nproc.conf file and thought I'd comment here.

- Please be aware that 1024 user processes is far too low when you deal
  with Java applications. This seems to be a common Java-related problem and
  is reported constantly on different sites.

Comment 12 Tomas Mraz 2013-01-16 12:57:42 UTC
It's weird that ulimit -u 4096 didn't work for you as a user. The 90-nproc.conf file sets only the soft limit.
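
E.g., roughly (illustrative values):

$ ulimit -Su          # soft limit, as set by 90-nproc.conf
1024
$ ulimit -Hu          # hard limit, still the kernel-derived value
30518
$ ulimit -S -u 4096   # an unprivileged user can raise the soft limit up to the hard limit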

Comment 13 Ali Akcaagac 2013-01-16 13:32:17 UTC
(In reply to comment #12)
> It's weird that ulimit -u 4096 didn't work for you as user. The
> 90-nproc.conf sets only the soft limit.

No, that's not what I have written.

> I even tried to set ulimit -u to 4096 (sadly it's not possible to increase
> that one from user perspective).

I mean values > 4096. I can set 4096 just fine, but no higher values. For example it's not possible to run 'ulimit -u 8192' as a user, even if that were the upper limit. The applications on the Java side sometimes need to spawn 20k processes.

The point is this:

To protect the system from being nuked by process bombs you have limited the max user processes to 1024 (in worst case 4096).

So far so good.

BUT:

You can write a small program in Java that recursively spawns new threads. This OTOH will lock your entire system. Say you can only spawn 1024 processes: a user does so only to cause some evil things, and voila, the system becomes unresponsive and cannot spawn anything.

This surely is not the solution. The problem that I had with Freemind was that it tried to swap all memory with the default ulimit -u 1024 and thus caused some bigger side effects. Now with ulimit -u 50000 I don't have that "memory" issue caused by Java anymore. No swapping of memory, no slowdown of the entire system.

Anyway, regardless of how we turn this, 1024 is still too low for Java applications.

Comment 14 Tomas Mraz 2013-01-16 17:02:29 UTC
Note that the 4096 limit does not come from /etc/security/limits.d/90-nproc.conf; it is probably the kernel default for your amount of RAM. Otherwise I do not know where this value is coming from.

I will not change the default 1024 setting - it is very fine for most (even Java) applications.

Also note that the limit is applied per-user so an evil user with 1024 processes will not affect spawning processes of another user.

Comment 15 Richard Neill 2013-10-27 14:56:59 UTC
Can I just say I've been bitten by this limit, which I think is too low for a modern desktop machine. This is particularly so given that the chromium web browser runs each tab in a separate process, so it's very easy to accumulate a couple of hundred processes.

Admittedly I'm a "power-user" (I run 3 monitors and 4 virtual desktops, and I make full use of 16 GB of RAM, and I'm the single user of this machine), but nevertheless, I wasn't doing anything particularly unusual. 

Finding the root cause of why I had random failures (cron jobs failing, or browser plugins sometimes not working) took me a long time. Even more annoying, there doesn't seem to be a way for root to raise the ulimit without the user logging out.

On the other hand, forkbomb protection isn't really very useful to me. (though a better way to protect against forkbombs might be to limit the maximum rate at which a process can spawn children once it exceeds say 50 children).

Thanks for your time.

Comment 16 Anssi Hannula 2013-12-15 01:52:29 UTC
Just for the record, I've been also hitting this limit on a desktop machine (this is the first time I managed to figure out the source of the strange problems similar to what Richard had experienced).

I got this list of the process names with most threads (similar processes combined):
$ ps mx -o comm= | awk '/^-$/ { cnt[comm]++; next } { comm=$1; }  END { for (i in cnt) print i": "cnt[i] }' | sort -rnk2
plugin-containe: 374
chrome: 212
firefox: 48
kwrite: 47
thunderbird: 38
steam: 38
dropbox: 25
opera:libflashp: 23
bash: 22
okular: 16
VBoxSVC: 15
opera: 11
soffice.bin: 8
kactivitymanage: 8
smplayer: 6
services.exe: 5
konqueror: 5
gvfsd-fuse: 5
[... clipped ~50 other user process names covering additional ~140 threads ...]

It seems like the main culprits are chromium and flash player. Each chromium tab consumes approx. 4 threads (I have ~40 tabs open). Each flash player applet consumes approx. 10 threads (includes browsers' plugin handling threads etc). Firefox also has ~30 tabs open, so that explains the value for Firefox, and I have about ~20 kwrite editors open (each has ~2 threads), etc.

I guess I can accept that this may not be such a normal workload (or that one should simply not use flash player...), but it seems like it is not that far-fetched that a user might have 50-100 chromium tabs open (I know I often have much more than that) and that a few dozen of those might be Flash video players, after which not too many additional threads from other processes are needed to break the limit.


(In reply to Richard Neill from comment #15)
> Even more
> annoying, there doesn't seem to be a way for root to raise the ulimit
> without the user logging out.

Actually there is, I just ran "prlimit --nproc=unlimited:unlimited -p $PID" for all my processes instead of logging out.
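
Roughly like this (a sketch; raising the hard limit of another user's processes needs
root, and "someuser" is just a placeholder):

# for pid in $(pgrep -u someuser); do prlimit --nproc=unlimited:unlimited -p "$pid"; done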

Comment 17 Robin Stocker 2014-02-04 12:58:04 UTC
I've also been bitten by this; the limit is too low for developers who use Java application servers and browsers.

Comment 18 Charlie Vaske 2014-02-21 20:15:26 UTC
This change has caused our group to spend quite a bit of time trying to figure out where it was getting set. On any modern hardware, 1024 processes are not a problem in the least. I often launch more than 1000 processes without issue.

The real problem is that make -j without an argument launches too many simultaneous disk accesses. It's a problem with make -j's default behavior. Trying to limit the number of processes is just an ugly hack to try to fake proper I/O limits.
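
The usual workaround is of course to bound the job count explicitly, e.g. something like:

$ make -j"$(nproc)"    # one job per CPU
$ make -j8 -l8         # or cap both the job count and the load average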

Changing the system-wide ulimit default to fix make's poor defaults will cause many more problems than it prevents for the occasional fat-fingered make call, especially as desktop machines get beefier. There's no excuse for letting a 1024 limit be anywhere near a server machine.

IMHO, this seems to be a case where a misdirected developer pet peeve is severely handicapping default system performance. I am also a developer, so I support changing make -j's defaults, but I am vehemently against letting this change last one more OS revision.

Comment 19 Jeremy Nickurak 2014-02-21 20:25:42 UTC
What's being done to evaluate the correctness of this value on an ongoing basis?

Chrome/chromium can *easily* have a few hundred threads just by itself.

Without anything else doing crazy multi-process:

$ ps xH | wc -l
490

Comment 20 Thomas Vander Stichele 2014-02-27 16:45:22 UTC
Finally traced down my many problems to this same limit.

Same here - using chrome, and with only about 25 tabs open right now I still have:

[root@otto ~]# echo chrome t; ps -Lf -u 1000 | grep chrome | wc; echo chrome p; ps auxw -u 1000 | grep chrome | wc; echo total t; ps -Lf -u 1000 |wc
chrome t
    459    9585  413858
chrome p
     52    1159   45330
total t
    963   16572  480267

so fairly close to the system limit for number of processes.

I don't remember from years ago whether the NPROC limit used to limit only processes or also their threads - but with fairly normal use of chrome it's easy to consume half of those 1024 threads/processes and run into this limit.

I would definitely suggest increasing the default limit to 2048 (and then dealing with this bug again in approx. 1.5 years), or even higher.

Comment 21 Tomas Mraz 2014-02-27 16:57:01 UTC
I've already raised it to 4096 in Rawhide.

Comment 22 Luke Hutchison 2014-06-01 09:45:32 UTC
In the last couple of releases of Chrome, the average number of v8 threads per tab increased pretty dramatically (possibly something to do with the garbage collector). Chrome now becomes unusable at about 1/8th of the number of tabs that I used to be able to have open, due to hitting the process limit. I hope that 4096 is enough, but I doubt it for the case of tab junkies. (Unfortunately Chrome doesn't tell the user what is wrong either, things just start not working.)

Comment 23 Dirk Muders 2014-06-11 13:55:47 UTC
We were hit by this limit while testing a telescope control system which needs many more than 1024 processes/threads. The strange behavior of threads first led us to modify the default thread stack size limits, which seemed to help. Only when we ran some load tests and saw fork issues did we make the connection to the number of processes. The obscure location of the limit configuration file made it difficult to track down where the 1024 was set. Overall this caused several weeks of iterations. I am thus very much in favor of raising or removing the limit and instead working on improving the behavior of "make -j". The kernel-calculated defaults on normal machines these days are on the order of 50,000 to 100,000, and on large servers up to several hundred thousand. Anything below 10,000 seems too conservative.

Comment 24 Need Real Name 2014-10-31 23:51:54 UTC
A few comments/observations:

- On my Fedora 19 and 20 systems, I still show the 1024 limit - is this held up in rawhide? (per comment 21)

- I am fine with the design of the limits configuration files and limits.d directory ... many other packages use this convention and I believe it makes packaging easier and also better encapsulates configuration

- I think everyone needs to overcome the fascination with the 'make -j' example. That was only an easy-to-reproduce example of the problem, not an indication of the main issue that is being addressed. The main issue is that systems are needlessly vulnerable to fork-bombs and other process denial of service attacks / user misadventures. 

I am a java developer and chrome and firefox abuser, so I hit this problem easily at 1024. I agree there should be a more sophisticated solution, but I think a setting in the range of 4096-10240 is a reasonable compromise. A more sophisticated solution might be to permit a whitelist for specified executables, so that java and google-chrome and firefox processes can go nuts without the user having to find a magic file somewhere.
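
(The closest thing pam_limits offers today is a per-user or per-group override in a
drop-in file, rather than anything keyed on the executable -- a rough sketch, with a
hypothetical file name, group and value:

# /etc/security/limits.d/91-devel.conf
@devel     soft    nproc     16384
)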

Comment 25 Tomas Mraz 2014-11-03 09:09:13 UTC
(In reply to Need Real Name from comment #24)
> A few comments/observations:
> 
> - On my Fedora 19 and 20 systems, I still show the 1024 limit - is this held
> up in rawhide? (per comment 21)

Yes, this is changed to 4096 in Fedora 21 and rawhide only.

> I am a java developer and chrome and firefox abuser, so I hit this problem
> easily at 1024. I agree there should be a more sophisticated solution, but I
> think a setting in the range of 4096-10240 is a reasonable compromise. A
> more sophisticated solution might be to permit a whitelist for specified
> executables, so that java and google-chrome and firefox processes can go
> nuts without the user having to find a magic file somewhere.

Note that the limit set is the soft one, so it would theoretically be possible to create a wrapper script that raises the soft limit, or an application that expects many forks can raise it directly in its own code.
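
Something along these lines (a rough sketch; /usr/bin/myapp is just a placeholder):

#!/bin/bash
# Raise the per-user process soft limit up to the hard limit, then exec the real program.
ulimit -S -u "$(ulimit -H -u)"
exec /usr/bin/myapp "$@"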

Comment 26 Luke Hutchison 2014-11-03 09:53:18 UTC
Re. soft limit, in comment #25: processes cannot be expected to try to raise soft limits by themselves. Programs that don't start a lot of processes or threads should not have to raise a soft limit just so they can start up, and if programs that do start a lot of processes or threads are always expected to try to raise the soft limit themselves, then what's the point of even having a soft limit in the first place? Additionally, if an unprivileged process can simply raise the soft limit itself, a forkbomb can raise the soft limit too all the way up to the hard limit, so it is only non-forkbomb software (i.e. legitimate software that the user intended to run) that will be negatively impacted by an artificially-imposed soft limit. The soft limit should simply be set to the hard limit. 

Chrome can easily consume all 4096 processes itself with lots of tabs open, and the system remains responsive, *until the limit is hit*, then the system is brought to its knees *by the limit being imposed*, because processes can no longer start. So actually, imposing a low process limit is much more damaging to system performance in real world scenarios than having a higher limit.

The number 4096 seems arbitrarily-chosen, and in my opinion is still too low. Since the stated reason for the low limit is to prevent forkbombing, has anybody run tests to show just how many forkbomb processes need to be created before the system starts struggling under the load on a modern machine with a modern kernel?

Comment 27 Gerrit Slomma 2014-11-04 20:06:05 UTC
I was affected on one of my servers, running 4 JBoss EAP6 instances as well as one Java app talking to connected smartcard readers, on Red Hat Enterprise Linux 6.
It took a lot of time to figure out why one JBoss wouldn't restart, with the error message: "out of memory error: can't create native thread".
I hit the 1024-userproc limit.
The old servers still running Red Hat Enterprise Linux 5 are running fine with about 3k threads/processes.
The value should surely be calculated according to the hardware; on a 2-socket 12-core system (48 virtual cores) with 128 GB RAM and SSD disks the setting should be a bit more liberal than 1024 (RHEL6) or 4096 (RHEL7).

Comment 28 Michel Lind 2014-11-28 04:00:09 UTC
(In reply to Gerrit Slomma from comment #27)
> I was affected on one of my servers, running 4 JBoss EAP6 instances as well
> as one Java app talking to connected smartcard readers, on Red Hat Enterprise
> Linux 6.
> It took a lot of time to figure out why one JBoss wouldn't restart, with the
> error message: "out of memory error: can't create native thread".
> I hit the 1024-userproc limit.
> The old servers still running Red Hat Enterprise Linux 5 are running fine
> with about 3k threads/processes.
> The value should surely be calculated according to the hardware; on a
> 2-socket 12-core system (48 virtual cores) with 128 GB RAM and SSD disks the
> setting should be a bit more liberal than 1024 (RHEL6) or 4096 (RHEL7).

I agree, I've been hitting this limit recently simply by having Chrome running (with about 20-30 tabs open plus several web apps running in separate windows).

Comment 29 Perry Lovill 2015-01-01 02:42:29 UTC
Part of the reason why this issue is so contentious is that the u/r/limit in question is now being used in a way that's inconsistent with the history of Unix variants' process limit values, be they NPROC, maxuproc, maxuprc, or anything else related to limiting the number of *processes* running (be it system-wide or per-user).  What's inconsistent is: in the era of threads/LWPs, a single value to control *both* top-level PIDs and underlying *threads* (TIDs) is no longer adequate, IMHO.  We should instead throttle the TID count independently of the PID count rather than the current scheme of counting them all under the same bucket.

If we were to have an RLIMIT_NTHRD_PER_UPROC and/or an RLIMIT_NTHRD_PER_UID, in addition to RLIMIT_NPROC, we could control the total count of threads/LWPs per user, and/or the total count of threads/LWPs per user PID -- IN ADDITION TO the total count of top-level PIDs per UID.  In my 30+ years as a *NIX SysAdmin, by far the most common "forkbomb" situation has been scripting by novice end-users or developers where the shell script *accidentally* calls itself recursively.  For shell scripts, the normal fork mechanism used would not create multiple threads under a single PID, but instead would create multiple top-level PIDs under that user's UID.  So, if we were to count the top-level PIDs separately from the threads each PID creates, then we could limit the most common unintentional forkbomb situations, while not interfering with the normal functioning of modern-day multi-tabbed browsers and multi-threaded JVMs where thousands of TIDs per PID is really not uncommon.

Just my $0.02

Comment 30 Øyvind Stegard 2015-08-04 12:31:52 UTC
Perry L., your two cents are spot on.

This *low* default limit (even if soft) just caused a lot of grief and service downtime on our RHEL6 servers running Java applications. The total thread count happened to go beyond 1024 (at peak traffic) and the JVMs (all running under the same UID) started complaining while the services went down. We were not aware of this change (compared to RHEL5). Such a default value is not suitable for heavily multithreaded processes.

Comment 31 Sam Thursfield 2015-09-22 09:40:40 UTC
I hit this bug recently: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=750687

It causes `git status` to become a fork bomb if the filesystem permissions inside a Git submodule are wrong.

As I was running as 'root' in a chroot, the resource limit didn't actually kick in (root has 'nproc' set to 'unlimited') and my machine froze several times before I worked out what was happening.

But, I am really glad Fedora resource limits try to prevent this. If I'd been running as a normal user (which I should have been doing) it would have saved lots of wasted time.

Comment 32 Sam Thursfield 2015-09-22 09:44:53 UTC
Seems https://bugzilla.redhat.com/show_bug.cgi?id=754285 is also related. Sorry for commenting on a closed bug -- this is the only one I could find on bugzilla.redhat.com when searching for 'fork bomb' ... and I only found this one via text in bug 1189860!

