Bug 1687694 (CVE-2019-9705) - CVE-2019-9705 vixie-cron: memory consumption DoS via a large crontab file
Summary: CVE-2019-9705 vixie-cron: memory consumption DoS via a large crontab file
Keywords:
Status: CLOSED ERRATA
Alias: CVE-2019-9705
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Red Hat Product Security
QA Contact:
URL:
Whiteboard:
Depends On: 1711356 1711367 1711368
Blocks: 1687709
Reported: 2019-03-12 07:30 UTC by Dhananjay Arunesh
Modified: 2021-10-27 03:26 UTC (History)

Fixed In Version: cronie 1.5.3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-27 03:26:58 UTC
Embargoed:



Description Dhananjay Arunesh 2019-03-12 07:30:33 UTC
Vixie Cron before the 3.0pl1-133 Debian package allows local users to cause a denial of service (memory consumption) via a large crontab file because an unlimited number of lines is accepted.

Upstream commit:
https://salsa.debian.org/debian/cron/commit/26814a26

Comment 1 Tomas Mraz 2019-03-15 13:54:49 UTC
Handled in cronie in the 1.5.3 release.

Comment 3 Riccardo Schirone 2019-05-17 15:06:39 UTC
Created cronie tracking bugs for this issue:

Affects: fedora-all [bug 1711356]

Comment 5 Marcel Plch 2019-06-26 07:51:54 UTC
There seems to be a use case for *not* limiting the number of cron jobs.

Aniket, may I ask you to explain the use case as you explained to me via e-mail?

Thank you.

Comment 6 Tomas Mraz 2019-06-26 08:16:22 UTC
There should always be a limit; the only question is how high it should be.

Comment 7 Marcel Plch 2019-06-26 08:21:06 UTC
Maybe it would be wise to set the limit dynamically based on system resources? Or at least load it from some config file?
Large mainframes should be expected to run more jobs than a personal workstation.

Comment 8 Tomas Mraz 2019-06-26 08:34:12 UTC
I do not think we want to introduce a config file just for this, or complicate it with system-resource estimation. At most I can see adding a new option, but even that does not make much sense to me. The limit is per-crontab and affects only the user's own crontab, so what would be the maximum needed if the current limit of 1000 is not sufficient? 10000? 50000? I do not think anything higher is appropriate at all.

Comment 9 Marcel Plch 2019-06-26 08:38:17 UTC
Let's wait for the information from Aniket and decide according to that, then.

Comment 10 Aniket Bhavsar 2019-06-26 19:05:51 UTC
Below is the use case as per customer:

~~~~
Our applications were running on a Solaris platform, and from time to time the limit of 100 jobs was reached (c queue max run limit reached). In that case, after a few minutes, the job in the queue is dropped.
Currently, we don't have any issues with these applications on a Linux 7.3 platform because we have already reduced the number of jobs and the frequency of execution. But because of business, the number of jobs is increasing.
So we are wondering about the behaviour of the Linux system...
~~~~

Comment 11 Tomas Mraz 2019-06-27 06:20:58 UTC
Is this really related to the user cron job table limit? The current upstream limit is 1000 different user cron jobs (as created by the crontab command) for a single user, so that is quite far from 100.

However, the Solaris case mentioned above does not really talk about cron jobs.

Comment 12 Aniket Bhavsar 2019-06-28 19:39:11 UTC
It is more like 'How many simultaneous jobs can the crond daemon execute at a time?'.

Comment 13 Tomas Mraz 2019-07-01 07:46:29 UTC
There is no artificial limit for that.

Comment 14 Marcel Plch 2019-07-03 14:44:07 UTC
Aniket, are you able to inform the customer that we plan to fix this CVE by setting the upstream limit of 1000 jobs?
Also note that this limit is set per-user and does not apply to root jobs.

Please also ask for feedback on whether this is alright for the customer. If they move their crontab configuration to root's crontab, they shouldn't have to worry about the limit; the question is whether this approach is feasible for them.

Thank you.

Comment 17 Aniket Bhavsar 2019-08-14 04:15:06 UTC
Apologies for the delay. There was no response from the customer, and the case is closed now.

