Vixie Cron before the 3.0pl1-133 Debian package allows local users to cause a denial of service (memory consumption) via a large crontab file because an unlimited number of lines is accepted. Upstream commit: https://salsa.debian.org/debian/cron/commit/26814a26
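The memory-consumption path is straightforward to exercise, since before the fix a crontab of any length was accepted. A minimal sketch of generating such a file (the job count and `/tmp` path are illustrative only; do not actually install the result on an unpatched system):

```shell
# Build an oversized user crontab; every line is a valid entry, so a
# pre-3.0pl1-133 crond/crontab accepts all of them and consumes memory
# proportional to the file size.
lines=5000   # illustrative; an actual abuse case would use far more
for i in $(seq 1 "$lines"); do
  printf '* * * * * /bin/true # job %s\n' "$i"
done > /tmp/huge-crontab
# Installing it with `crontab /tmp/huge-crontab` is what triggers the issue.
wc -l < /tmp/huge-crontab
```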
In cronie, this was handled in the 1.5.3 release.
Created cronie tracking bugs for this issue: Affects: fedora-all [bug 1711356]
There seems to be a use case for *not* limiting the number of cron jobs. Aniket, may I ask you to explain the use case as you explained it to me via e-mail? Thank you.
There always should be a limit. The only question is how high it should be.
Maybe it would be wise to set the limit dynamically based on system resources? Or at least load it from some config file? Large mainframes should be expected to run more jobs than a personal workstation.
I do not think we want to introduce a config file just for this, or complicate it with system-resource estimation. At most I can see adding a new option, but even that does not make much sense to me. The limit is per-crontab and affects only the user's own crontab, so what would the maximum need to be if the current limit of 1000 is not sufficient? 10000? 50000? I do not think anything higher is appropriate at all.
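The per-crontab cap discussed above can be illustrated with a hypothetical pre-install check (this is a sketch, not the actual upstream patch; `check_crontab` and the cap value are assumptions for illustration):

```shell
# Reject a crontab file whose non-comment line count exceeds a fixed cap.
# $1 = crontab file, $2 = maximum number of lines (e.g. the upstream 1000).
check_crontab() {
  count=$(grep -cv '^[[:space:]]*#' "$1")
  if [ "$count" -gt "$2" ]; then
    echo "crontab: too many entries ($count > $2)" >&2
    return 1
  fi
  return 0
}
# Usage: check_crontab /path/to/crontab 1000 && crontab /path/to/crontab
```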
Let's wait for the information from Aniket and decide according to that, then.
Below is the use case as per customer: ~~~~ Our applications were running on a Solaris platform. And from time to time, the limit of 100 jobs were reached (c queue max run limit reached). In that case, after a few minutes, the job in queue is dropped. Currently, we don't have any issues with these applications on a Linux 7.3 platform because we already have reduced the number of jobs and the frequency of execution. But because of business, the number of jobs is increasing. So we are wondering about the behaviour of Linux system... ~~~~
Is this really related to the user crontab job limit? The current upstream limit is 1000 distinct cron jobs (as created by the crontab command) for a single user, which is quite far from 100. However, the Solaris case mentioned above does not really talk about cron jobs.
It is more like 'How many simultaneous jobs can the crond daemon execute at a time?'.
There is no artificial limit for that.
Aniket, are you able to inform the customer that we plan to fix this CVE by setting the upstream limit of 1000 jobs? Also note that this limit is per-user and does not apply to root's jobs. Please also ask for feedback on whether this is acceptable for the customer. If they move their crontab configuration to root's crontab, they shouldn't have to worry about the limit; the question is whether this approach is feasible for them. Thank you.
Apologies for the delay. There was no response from the customer and the case is closed now.