Bug 887931 - Keep track of per-PID/cgroup/process group fds in the PAM responder.
Summary: Keep track of per-PID/cgroup/process group fds in the PAM responder.
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: sssd
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 7.1
Assignee: SSSD Maintainers
QA Contact: Namita Soman
URL:
Whiteboard:
Duplicates: 887936 887938 (view as bug list)
Depends On:
Blocks:
 
Reported: 2012-12-17 16:03 UTC by Jakub Hrozek
Modified: 2020-05-02 17:01 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-23 13:08:59 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github SSSD sssd issues 2612 0 None None None 2020-05-02 17:01:15 UTC

Description Jakub Hrozek 2012-12-17 16:03:45 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/sssd/ticket/1570

This is another idea that came out of an IRC discussion between me, Sumit, and Simo. Simo proposed that we keep track of open fds and, when the count of open file descriptors reaches a certain limit, kill the oldest one (i.e. the one that has been open the longest). The question is what should be used as the key in such a structure: we can group the fds by PID, UID, process group, or cgroup. The limit needs to be tunable.
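As a rough illustration of the proposed bookkeeping (this is not SSSD code; the type and function names and the per-PID default of 15 are invented for this sketch), the tracking could look roughly like this in C, with one entry per client PID kept in whatever lookup structure the responder already uses:

{{{
/* Illustrative sketch only, not SSSD code: per-PID bookkeeping of open
 * client fds so that the oldest one can be closed once a tunable limit
 * is exceeded. */
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>

#define MAX_FDS_PER_PID 15          /* hypothetical tunable default */

struct pam_client_fd {
    int fd;
    time_t last_used;               /* refreshed on every request on this fd */
    struct pam_client_fd *next;
};

struct pam_client_fds {
    pid_t pid;                      /* key: client PID */
    unsigned count;                 /* number of open fds for this PID */
    struct pam_client_fd *fds;      /* open fds, newest first */
};

/* Record a newly accepted fd for one client PID; if the per-PID limit is
 * now exceeded, close the fd that has been idle the longest, on the
 * assumption that a misbehaving client is simply leaking fds. */
static int track_client_fd(struct pam_client_fds *client, int fd)
{
    struct pam_client_fd *item, *cur, *oldest;

    item = calloc(1, sizeof(*item));
    if (item == NULL) {
        return -1;
    }
    item->fd = fd;
    item->last_used = time(NULL);
    item->next = client->fds;
    client->fds = item;
    client->count++;

    if (client->count <= MAX_FDS_PER_PID) {
        return 0;
    }

    /* Find and close the least recently used fd. */
    oldest = client->fds;
    for (cur = client->fds; cur != NULL; cur = cur->next) {
        if (cur->last_used < oldest->last_used) {
            oldest = cur;
        }
    }
    close(oldest->fd);
    oldest->fd = -1;                /* a real implementation would also
                                     * unlink and free the list node */
    client->count--;
    return 0;
}
}}}

The same structure works if the key is changed to UID, process group, or cgroup; the ticket deliberately leaves that choice open.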

Full discussion follows:
{{{
17:13 < simo> jhrozek: ok closing idle connections is ok
17:13 < simo> but does not protect you from malicious or simply badly misbehaving clients
17:14 < simo> jhrozek: I think we need to protect ourselves a bit more
17:14 < simo> and that can be relatively easily done with not too great changes 
17:14 < jhrozek> well, we would reach the fd limit soon with a misbehaving application
17:14 < simo> the way to do it is to keep a per-pid list of open sockets
17:14 < simo> perhaps in a hash table
17:15 < simo> if the same pid tries to open more than X connection we simply go and kill the least used one to make space
17:15 < simo> it also means keeping a timestamp associated with the fd that marks when a connection was used last
17:15 < simo> this way we can have a tunable parm that says something like: no more than 15 connections per pid
17:16 < jhrozek> wouldn't a malicious application simply fork and open a new fd in a child?
17:16 < simo> a very bad app could still fork children though ...
17:16 < simo> actually we could have a limit per process group
17:16 < simo> or even per user
17:16 < simo> but exempt root
17:16 < simo> or maybe we should have a combination of limits
17:17 < jhrozek> I think we should keep the limits simple
17:17 < simo> per-pid + per-cgroup + per-user
17:17 < jhrozek> to make sure we're not denying legitimate access
17:17 < simo> jhrozek: you can, but then sssd_pam can be easily abused and DoSed
17:18 < simo> all it takes is a user running a bash script that forks a child and cat the pam pipe
17:18 < simo> and presto you will use all FDs
17:18 < simo> so some limits that prevents that should be put in place on the server side
17:21 < jhrozek> OK
17:22 < jhrozek> I'll write it up into a ticket, but I still think the limits should be kept simple at least for configurations's sake
17:22 < jhrozek> X allowed connections per process group is much easier to understand than a combination of factors
17:22 < simo> jhrozek: oh yeah we need to try to make them generally not something you want to actually explicitly configure unless you have some weird setup
17:22 < simo> jhrozek: yeah we can start simple and see if that suffices
17:23 < simo> jhrozek: I would think of using cgroups though
17:23 < simo> but not sure
17:23 < simo> the problem is that it is easy to create a new process group as a user
17:23 < simo> so it won't be effective
17:23 < jhrozek> I know little about cgroups
17:23 < simo> maybe we should simply have a relatively large X per-user
17:23 < simo> like 50 per-uid
17:23 < simo> tunable
17:24 < simo> root exampted
17:24 < simo> *exempted
17:24 < simo> for now
17:24 < jhrozek> and your proposal was to keep a timestamp or keep them ordered based on time and when the limit is reached, then go and kill the last one?
17:26 < simo> jhrozek: the first one (oldest) one
17:26 < simo> I assume that a misbehaving app is simply leaking fds
17:26 < simo> so killing the oldest should be safe
17:27 < jhrozek> simo: right, that's what I meant by "last".
17:27 < simo> for active DoSs I do not care which one is killed, they are all bad connections
}}}
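The transcript converges on a simpler first cut: a relatively large per-UID limit (e.g. 50), tunable, with root exempted. In the same hypothetical C as the sketch in the description (again, the names and the default value are assumptions, not SSSD API), the accept-time policy check would amount to:

{{{
/* Illustrative sketch only, not SSSD code: the per-UID policy discussed
 * above -- a relatively large, tunable limit, with root exempted. */
#include <stdbool.h>
#include <sys/types.h>

#define DEFAULT_FDS_PER_UID 50      /* "like 50 per-uid, tunable" */

struct uid_fd_account {
    uid_t uid;                      /* key: client UID */
    unsigned open_fds;              /* fds currently open for this UID */
};

/* Returns true if accepting one more connection from this UID should
 * first evict that UID's oldest open fd (the sketch in the description
 * shows one way to pick the oldest one). */
static bool over_per_uid_limit(const struct uid_fd_account *acct,
                               unsigned limit)
{
    if (acct->uid == 0) {
        return false;               /* root exempted, for now */
    }
    return acct->open_fds >= limit;
}
}}}

Closing the oldest fd first is safe for the leak case, since leaked fds are by definition the ones idle the longest; for an active DoS it does not matter which connection is dropped.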

Comment 2 Jakub Hrozek 2012-12-17 16:14:53 UTC
*** Bug 887938 has been marked as a duplicate of this bug. ***

Comment 3 Jakub Hrozek 2012-12-17 16:14:54 UTC
*** Bug 887936 has been marked as a duplicate of this bug. ***

Comment 7 Jakub Hrozek 2016-11-23 13:08:59 UTC
Since this problem is already tracked in an upstream ticket and this bugzilla is not being planned for any immediate release either in RHEL or upstream, I'm closing this bugzilla with the resolution UPSTREAM.

Please reopen this bugzilla report if you disagree.

