Description of problem:
I just applied updates to my system via dnf. cronie wasn't included, but one of the updates restarted crond. I subsequently found that none of my cron jobs were running. From the journal (you can see it was restarted twice during the updates, and the second time is where it broke):

~~~
Aug 02 12:03:31 localhost.example.com systemd[1]: Stopping Command Scheduler...
Aug 02 12:03:31 localhost.example.com crond[1051173]: (CRON) INFO (Shutting down)
Aug 02 12:03:31 localhost.example.com systemd[1]: crond.service: Succeeded.
Aug 02 12:03:31 localhost.example.com systemd[1]: Stopped Command Scheduler.
Aug 02 12:03:31 localhost.example.com systemd[1]: crond.service: Consumed 51min 14.374s CPU time.
Aug 02 12:03:31 localhost.example.com systemd[1]: Started Command Scheduler.
Aug 02 12:03:31 localhost.example.com crond[2830396]: (CRON) STARTUP (1.5.4)
Aug 02 12:03:31 localhost.example.com crond[2830396]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 64% if used.)
Aug 02 12:03:32 localhost.example.com crond[2830396]: (CRON) INFO (running with inotify support)
Aug 02 12:03:32 localhost.example.com crond[2830396]: (CRON) INFO (@reboot jobs will be run at computer's startup.)
Aug 02 12:05:15 localhost.example.com systemd[1]: Stopping Command Scheduler...
Aug 02 12:05:15 localhost.example.com crond[2830396]: (CRON) INFO (Shutting down)
Aug 02 12:05:15 localhost.example.com systemd[1]: crond.service: Succeeded.
Aug 02 12:05:15 localhost.example.com systemd[1]: Stopped Command Scheduler.
Aug 02 12:05:15 localhost.example.com systemd[1]: Started Command Scheduler.
Aug 02 12:05:15 localhost.example.com crond[2830770]: (CRON) STARTUP (1.5.4)
Aug 02 12:05:15 localhost.example.com crond[2830770]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 54% if used.)
Aug 02 12:05:15 localhost.example.com crond[2830770]: ((null)) No SELinux security context (/etc/crontab)
Aug 02 12:05:15 localhost.example.com crond[2830770]: (root) FAILED (loading cron table)
Aug 02 12:05:15 localhost.example.com crond[2830770]: ((null)) No SELinux security context (/etc/cron.d/mythtv)
Aug 02 12:05:15 localhost.example.com crond[2830770]: (root) FAILED (loading cron table)
Aug 02 12:05:15 localhost.example.com crond[2830770]: ((null)) No SELinux security context (/etc/cron.d/cacti)
Aug 02 12:05:15 localhost.example.com crond[2830770]: (root) FAILED (loading cron table)
Aug 02 12:05:15 localhost.example.com crond[2830770]: ((null)) No SELinux security context (/etc/cron.d/0hourly)
Aug 02 12:05:15 localhost.example.com crond[2830770]: (root) FAILED (loading cron table)
Aug 02 12:05:16 localhost.example.com crond[2830770]: (CRON) INFO (running with inotify support)
Aug 02 12:05:16 localhost.example.com crond[2830770]: (CRON) INFO (@reboot jobs will be run at computer's startup.)
~~~

There's nothing wrong with the SELinux contexts on those files:

~~~
[root@localhost ~]# ls -lZ /etc/crontab /etc/cron.d/mythtv /etc/cron.d/cacti /etc/cron.d/0hourly
-rw-r--r--. 1 root root system_u:object_r:system_cron_spool_t:s0 128 Jul 24 2019 /etc/cron.d/0hourly
-rw-r--r--. 1 root root system_u:object_r:system_cron_spool_t:s0 77 Jul 24 14:03 /etc/cron.d/cacti
-rw-r--r--. 1 root root unconfined_u:object_r:system_cron_spool_t:s0 43 Aug 1 2010 /etc/cron.d/mythtv
-rw-r--r--. 1 root root system_u:object_r:system_cron_spool_t:s0 451 Jul 24 2019 /etc/crontab
[root@localhost ~]# restorecon -v /etc/crontab /etc/cron.d/mythtv /etc/cron.d/cacti /etc/cron.d/0hourly
[...no output]
~~~

This is what I installed at the time:

~~~
[root@localhost ~]# dnf history info 1550
Transaction ID : 1550
Begin time     : Sun 02 Aug 2020 12:03:02 BST
Begin rpmdb    : 3101:3aa5b568bb9d25ab158437c1bc0f022d6aa1c99a
End time       : Sun 02 Aug 2020 12:06:23 BST (201 seconds)
End rpmdb      : 3101:556ea727dafcb5b067a3e292890ca7f07ec4064a
User           : Russell Odom <russ>
Return-Code    : Success
Releasever     : 31
Command Line   : update --exclude=exim*
Comment        :
Packages Altered:
    Upgrade  clamtk-6.04-1.fc31.noarch @updates
    Upgraded clamtk-6.03-1.fc31.noarch @@System
    Upgrade  cmake-filesystem-3.17.3-4.fc31.x86_64 @updates
    Upgraded cmake-filesystem-3.17.3-3.fc31.x86_64 @@System
    Upgrade  container-selinux-2:2.142.0-1.fc31.noarch @updates
    Upgraded container-selinux-2:2.138.0-1.fc31.noarch @@System
    Upgrade  evolution-data-server-3.34.4-2.fc31.x86_64 @updates
    Upgraded evolution-data-server-3.34.4-1.fc31.x86_64 @@System
    Upgrade  evolution-data-server-langpacks-3.34.4-2.fc31.noarch @updates
    Upgraded evolution-data-server-langpacks-3.34.4-1.fc31.noarch @@System
    Upgrade  freerdp-libs-2:2.2.0-1.fc31.x86_64 @updates
    Upgraded freerdp-libs-2:2.1.2-1.fc31.x86_64 @@System
    Upgrade  glibc-2.30-13.fc31.i686 @updates
    Upgraded glibc-2.30-11.fc31.i686 @@System
    Upgrade  glibc-2.30-13.fc31.x86_64 @updates
    Upgraded glibc-2.30-11.fc31.x86_64 @@System
    Upgrade  glibc-all-langpacks-2.30-13.fc31.x86_64 @updates
    Upgraded glibc-all-langpacks-2.30-11.fc31.x86_64 @@System
    Upgrade  glibc-common-2.30-13.fc31.x86_64 @updates
    Upgraded glibc-common-2.30-11.fc31.x86_64 @@System
    Upgrade  glibc-devel-2.30-13.fc31.x86_64 @updates
    Upgraded glibc-devel-2.30-11.fc31.x86_64 @@System
    Upgrade  glibc-headers-2.30-13.fc31.x86_64 @updates
    Upgraded glibc-headers-2.30-11.fc31.x86_64 @@System
    Upgrade  glibc-langpack-en-2.30-13.fc31.x86_64 @updates
    Upgraded glibc-langpack-en-2.30-11.fc31.x86_64 @@System
    Upgrade  libnsl-2.30-13.fc31.x86_64 @updates
    Upgraded libnsl-2.30-11.fc31.x86_64 @@System
    Upgrade  libwinpr-2:2.2.0-1.fc31.x86_64 @updates
    Upgraded libwinpr-2:2.1.2-1.fc31.x86_64 @@System
    Upgrade  nspr-4.26.0-1.fc31.x86_64 @updates
    Upgraded nspr-4.25.0-1.fc31.x86_64 @@System
    Upgrade  nss-3.54.0-1.fc31.x86_64 @updates
    Upgraded nss-3.53.0-2.fc31.x86_64 @@System
    Upgrade  nss-softokn-3.54.0-1.fc31.x86_64 @updates
    Upgraded nss-softokn-3.53.0-2.fc31.x86_64 @@System
    Upgrade  nss-softokn-freebl-3.54.0-1.fc31.x86_64 @updates
    Upgraded nss-softokn-freebl-3.53.0-2.fc31.x86_64 @@System
    Upgrade  nss-sysinit-3.54.0-1.fc31.x86_64 @updates
    Upgraded nss-sysinit-3.53.0-2.fc31.x86_64 @@System
    Upgrade  nss-tools-3.54.0-1.fc31.x86_64 @updates
    Upgraded nss-tools-3.53.0-2.fc31.x86_64 @@System
    Upgrade  nss-util-3.54.0-1.fc31.x86_64 @updates
    Upgraded nss-util-3.53.0-2.fc31.x86_64 @@System
    Upgrade  podman-2:2.0.3-1.fc31.x86_64 @updates
    Upgraded podman-2:2.0.2-1.fc31.x86_64 @@System
    Upgrade  podman-docker-2:2.0.3-1.fc31.noarch @updates
    Upgraded podman-docker-2:2.0.2-1.fc31.noarch @@System
    Upgrade  podman-plugins-2:2.0.3-1.fc31.x86_64 @updates
    Upgraded podman-plugins-2:2.0.2-1.fc31.x86_64 @@System
    Upgrade  python3-regex-2020.7.14-1.fc31.x86_64 @updates
    Upgraded python3-regex-2020.6.8-2.fc31.x86_64 @@System
Scriptlet output:
   1 /var/tmp/rpm-tmp.zTde7h: line 5: /sbin/sln: No such file or directory
   2 /var/tmp/rpm-tmp.ocXE4l: line 5: /sbin/sln: No such file or directory
~~~

I looked at what installed between 12:03:31 and 12:05:15 to see if that would identify the culprit, but that didn't narrow it down. However, looking at the dependencies of cronie, I strongly suspect the glibc update is the cause (feel free to reassign the bug if appropriate). I tried to downgrade glibc, but I'm only offered 2.30-5; 2.30-11 isn't available any more, so I've decided not to try it for now. I can do so if anyone thinks it will be useful and won't break anything :-)

Version-Release number of selected component (if applicable):
cronie-1.5.4-2.fc31.x86_64

How reproducible:
Same errors and behaviour every time I restart crond.

Steps to Reproduce:
1. Apply updates via dnf as above.

Actual results:
crond does not load at least some of the expected cron jobs on start, and does not execute them when it should.

Expected results:
crond behaves as it should.

Additional info:
I tried to work around it with `setenforce 0; systemctl restart crond; sleep 2; setenforce 1`. This allows crond to load the files on start:

~~~
Aug 02 20:41:01 localhost.example.com crond[2971856]: ((null)) No security context but SELinux in permissive mode, continuing (/etc/crontab)
Aug 02 20:41:02 localhost.example.com crond[2971856]: ((null)) No security context but SELinux in permissive mode, continuing (/etc/cron.d/mythtv)
Aug 02 20:41:02 localhost.example.com crond[2971856]: ((null)) No security context but SELinux in permissive mode, continuing (/etc/cron.d/cacti)
Aug 02 20:41:02 localhost.example.com crond[2971856]: ((null)) No security context but SELinux in permissive mode, continuing (/etc/cron.d/0hourly)
~~~

However, it can't run the items on schedule (the cacti poller in this case, /etc/cron.d/cacti):

~~~
Aug 02 20:45:01 localhost.example.com crond[2972252]: (*system*) NULL security context for user ()
Aug 02 20:45:01 localhost.example.com crond[2972252]: (apache) ERROR (failed to change SELinux context)
~~~

If I disable SELinux with `setenforce 0` then everything runs as it should, so that's where I'm at now (obviously this is not ideal!).
I'm facing the same issue with my F31 installation, starting with the updates applied on 2020-08-02. In addition to the measures listed in the bug description, I've triggered a relabel of the whole file system, but the problem remains.
Hi, I have the same issue. Has anyone found a solution besides setting enforcement to permissive?
Hi. Same issue here on F31. I've just realized that many jobs were not running, and it all started with the updates at the beginning of August. I did a whole-filesystem relabel, but it didn't work. None of my jobs in /etc/cron.daily are running, due to "crond[3749]: ((null)) No SELinux security context (/etc/cron.d/0hourly)".
The problem doesn't occur on Fedora 32.
Is the problem still visible if you remove the container-selinux package? Recently we had similar SELinux problems with crond, and they were caused by the container-selinux package.
On Fedora 32 crond runs correctly with the container-selinux package installed. On Fedora 31 crond still errors, even with the container-selinux package removed:

~~~
Started Command Scheduler.
(CRON) STARTUP (1.5.4)
(CRON) INFO (RANDOM_DELAY will be scaled with factor 67% if used.)
((null)) No SELinux security context (/etc/crontab)
(root) FAILED (loading cron table)
((null)) No SELinux security context (/etc/cron.d/0hourly)
(root) FAILED (loading cron table)
((null)) No SELinux security context (/etc/cron.d/atop)
(root) FAILED (loading cron table)
(CRON) INFO (running with inotify support)
(CRON) INFO (@reboot jobs will be run at computer's startup.)
~~~
On F31 I removed container-selinux, did `setenforce 1` and restarted crond, and it now appears to work - I don't get the "No SELinux security context" errors (contrary to comment 6). However, running without that package breaks my containers, so that's not sustainable for me - I've put it back how I was before.
Hi. Same behaviour as Russell. F31 here; I removed container-selinux, ran `setenforce 1` and restarted crond: no SELinux errors in the journal for cron. I installed container-selinux again and the SELinux errors returned after restarting crond. I can't leave container-selinux removed either, so it's back to permissive mode for me...
Still an issue with container-selinux-2.145.0-1.fc31.

Since we've now determined container-selinux is likely at fault, I'll just call out from the original report that this broke between versions 2.138.0-1 and 2.142.0-1. Changes: https://github.com/containers/container-selinux/compare/v2.138.0...v2.142.0

This one, entitled "Allow cron jobs to run podman", sounds suspect: https://github.com/containers/container-selinux/commit/965c7fb488ccec2c623d1b71e665f70c8ef3db11 (fixes https://github.com/containers/container-selinux/issues/100).

I've logged https://github.com/containers/container-selinux/issues/106 in the hope it can be addressed upstream.
This seems to be in fact a bug in libselinux, which is fixed by the following upstream patch: https://github.com/SELinuxProject/selinux/commit/1f89c4e7879fcf6da5d8d1b025dcc03371f30fc9

(Well, the bug is actually in the kernel, but in a deprecated API element, which the above patch moves libselinux away from.)

The problematic libselinux call that crond is choking on can be simulated using this command:

~~~
python3 -c "import selinux; print(selinux.get_default_context_with_rolelevel('system_u', 'system_r', None, 'system_u:system_r:crond_t:s0-s0:c0.c1023'))"
~~~

Normally, that should print "[0, 'system_u:system_r:system_cronjob_t:s0-s0:c0.c1023']", but if there are too many transitions of a certain pattern in the policy (which probably happened after the container-selinux update) and an old libselinux is used, the underlying kernel operation overflows the available buffer and errors out, and the command prints "[-1, None]" instead.

This can be reproduced on F32 by installing container-selinux, downgrading libselinux to libselinux-3.0-3, and running the above command. Between libselinux-3.0-3 and -4 the aforementioned patch was backported, so the latest libselinux on F32 works correctly. To fix the issue on F31, the same patch needs to be backported to F31's libselinux.
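As a rough illustration of the failure mode described above (this does not call the real libselinux API): the deprecated kernel interface serializes every reachable context into a single page-sized buffer, so once the result exceeds one page the call fails instead of returning a context. A minimal, self-contained Python sketch, assuming a 4096-byte page and hypothetical context lists:

```python
# Model of the deprecated /sys/fs/selinux/user behaviour: the kernel
# writes the computed context list into one page-sized buffer and
# errors out once the serialized result no longer fits.
# This is an illustrative sketch, NOT the libselinux implementation.

PAGE_SIZE = 4096  # kernel page size mentioned in the analysis

def compute_user(reachable_contexts):
    """Return (0, contexts) on success, or (-1, None) when the
    NUL-terminated result would overflow a single page."""
    needed = sum(len(c) + 1 for c in reachable_contexts)  # +1 per NUL
    if needed > PAGE_SIZE:
        return -1, None
    return 0, list(reachable_contexts)

# With a small policy the call succeeds...
small = ['system_u:system_r:system_cronjob_t:s0-s0:c0.c1023'] * 10
print(compute_user(small)[0])   # -> 0

# ...but enough additional reachable domains (as the container-selinux
# update introduced) push the serialized size past one page.
big = ['system_u:system_r:container_runtime_t:s0-s0:c0.c1023'] * 100
print(compute_user(big)[0])     # -> -1
```

The upstream patch avoids this by moving libselinux away from the page-limited interface, so the result size no longer matters.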
I'd like to add more details about this issue. The F31 version of selinux.get_default_context_with_rolelevel() uses selinux.security_compute_user(), which uses the /sys/fs/selinux/user API to get the set of user contexts that can be reached from a source context. You can use Python to get some results:

~~~
import selinux
selinux.security_compute_user('system_u:system_r:crond_t:s0-s0:c0.c1023', 'system_u')
~~~

This returns 83 records (3690 bytes) when used with container-selinux-2.138.0, but the same command fails with container-selinux-2.144.0, as the size of the result is bigger than the kernel page size (4k).

To lower the size of the expected result, you can drop the MCS/MLS part from the source context:

~~~
import selinux
selinux.security_compute_user('system_u:system_r:crond_t:s0', 'system_u')
~~~

Using this code with the old and new container policy, you can get the list of added contexts:

+ 'system_u:system_r:container_runtime_t:s0'
+ 'system_u:system_r:spc_t:s0'
+ 'system_u:system_r:container_userns_t:s0'
+ 'system_u:system_r:container_logreader_t:s0'
+ 'system_u:system_r:container_kvm_t:s0'
+ 'system_u:system_r:container_init_t:s0'
+ 'system_u:system_r:container_engine_t:s0'
+ 'system_u:system_r:container_t:s0'
+ 'system_u:unconfined_r:container_runtime_t:s0'
+ 'system_u:unconfined_r:unconfined_t:s0'
+ 'system_u:unconfined_r:container_t:s0'

'system_u:system_r:container_runtime_t:s0' is related to the commit Russell found - https://github.com/containers/container-selinux/commit/965c7fb488ccec2c623d1b71e665f70c8ef3db11 - but this commit by itself would not overflow the kernel buffer. The other contexts are related to https://github.com/containers/container-selinux/commit/2750e78542a36bfffc97701183b839c8417e77aa, as crond_t is assigned the unconfined_domain_type attribute and the container_* types are assigned the container_domain attribute.

It's definitely a libselinux problem (already fixed upstream and in F32), which was uncovered by raising the number of reachable domains.
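The comparison above (running security_compute_user against the old and new policy and diffing the results) can be sketched without libselinux. The "old policy" list below is a small hypothetical sample; the additions are the ones quoted in this comment:

```python
# Sketch of the old-vs-new policy comparison described above: find which
# user contexts became newly reachable from crond_t after the
# container-selinux update. The old_policy sample is hypothetical;
# the added contexts are the ones listed in this comment.

old_policy = {
    'system_u:system_r:system_cronjob_t:s0',
    'system_u:system_r:crond_t:s0',
}

new_policy = old_policy | {
    'system_u:system_r:container_runtime_t:s0',
    'system_u:system_r:spc_t:s0',
    'system_u:system_r:container_userns_t:s0',
    'system_u:system_r:container_logreader_t:s0',
    'system_u:system_r:container_kvm_t:s0',
    'system_u:system_r:container_init_t:s0',
    'system_u:system_r:container_engine_t:s0',
    'system_u:system_r:container_t:s0',
    'system_u:unconfined_r:container_runtime_t:s0',
    'system_u:unconfined_r:unconfined_t:s0',
    'system_u:unconfined_r:container_t:s0',
}

# Contexts reachable under the new policy but not the old one:
added = sorted(new_policy - old_policy)
for ctx in added:
    print('+', ctx)
print(len(added), 'contexts added')  # -> 11 contexts added
```

With the real bindings you would capture the two result lists from selinux.security_compute_user() under each policy version and diff them the same way.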
https://src.fedoraproject.org/rpms/libselinux/pull-request/17
FEDORA-2020-ad7446b3fc has been submitted as an update to Fedora 31. https://bodhi.fedoraproject.org/updates/FEDORA-2020-ad7446b3fc
FEDORA-2020-ad7446b3fc has been pushed to the Fedora 31 testing repository. In short time you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-ad7446b3fc` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-ad7446b3fc See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
Grabbed the updates from the updates-testing repo, and now everything is back to normal here on F31. No more SELinux messages in the log, and crond is running its jobs correctly. Thanks.
Also confirming this now looks good on F31. Thanks!
FEDORA-2020-ad7446b3fc has been pushed to the Fedora 31 stable repository. If problem still persists, please make note of it in this bug report.