Bug 2026289

Summary: Crond runs into segfault when getdtablesize() returns a huge value.
Product: Red Hat Enterprise Linux 7
Reporter: Flos Qi Guo <qguo>
Component: cronie
Assignee: Jan Staněk <jstanek>
Status: CLOSED ERRATA
QA Contact: Jan Houska <jhouska>
Severity: high
Docs Contact:
Priority: urgent
Version: 7.9
CC: fkrska, jhouska, jreznik, jstanek, opohorel, rmetrich, sbalasub, sbroz
Target Milestone: rc
Keywords: Triaged
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: cronie-1.4.11-24.el7_9
Doc Type: Bug Fix
Doc Text:
Cause: crond sized some of its internal buffers based on the highest file descriptor number in use. In containers, this number could reach very high values. Consequence: In containers, crond sometimes attempted to allocate several gigabytes of memory that was not actually needed; when the allocation failed due to system limits, the program crashed. Fix: An upstream change limiting the allocated memory size to saner values was backported to this version of cronie. Result: crond no longer tries to allocate so much memory that it crashes, even in containers.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-01-11 17:35:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Flos Qi Guo 2021-11-24 09:52:38 UTC
> Description of problem:
Crond eats memory and eventually segfaults if getdtablesize() returns a huge value.

> Version-Release number of selected component (if applicable):
cronie-1.4.11-23.el7.x86_64

> How reproducible:
Frequently

> Steps to Reproduce:
1. On RHEL7.9
2. Set the nofile limit to 2147483584:

$ grep nofile /etc/security/limits.conf | grep myuser
myuser            soft    nofile         2147483584
myuser            hard    nofile         2147483584

3. Set up a cron job for 'myuser'

> Actual results:
Crond intermittently segfaults.

> Expected results:
No segfault.

> Additional info:
Upstream fix can prevent allocating lots of memory:
-------------8< -------------8< -------------8< -------------
commit 584911514ce6aa2f16e1d79431bac816ea62cb2c
Author: Tomas Mraz <tmraz>
Date:   Mon Jul 8 10:57:52 2019 +0200

    getdtablesize() can return very high values in containers
    
    Avoid closing hundreds of millions descriptors or allocating
    huge arrays by maxing the fd number to MAX_CLOSE_FD.
    See rhbz#1723106

...
 /*
  * Because crontab/at files may be owned by their respective users we
  * take extreme care in opening them.  If the OS lacks the O_NOFOLLOW
diff --git a/src/popen.c b/src/popen.c
index badddb6..4397264 100644
--- a/src/popen.c
+++ b/src/popen.c
@@ -81,12 +81,19 @@ FILE *cron_popen(char *program, const char *type, struct passwd *pw, char **jobe
        if (!pids) {
                if ((fds = getdtablesize()) <= 0)
                        return (NULL);
+               if (fds > MAX_CLOSE_FD)
+                       fds = MAX_CLOSE_FD; /* avoid allocating too much memory */
                if (!(pids = (PID_T *) malloc((u_int) ((size_t)fds * sizeof (PID_T)))))
                        return (NULL);
                memset((char *) pids, 0, (size_t)fds * sizeof (PID_T));
        }
        if (pipe(pdes) < 0)
                return (NULL);
...
-------------8< -------------8< -------------8< -------------

Comment 19 errata-xmlrpc 2022-01-11 17:35:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (cronie bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0067