Bug 1873064 - Cannot execute podman commands scheduled by cron
Summary: Cannot execute podman commands scheduled by cron
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: container-selinux
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Jindrich Novy
QA Contact: Edward Shen
URL:
Whiteboard:
Depends On:
Blocks: 1851085
 
Reported: 2020-08-27 10:07 UTC by Juan Badia Payno
Modified: 2021-02-16 14:22 UTC
CC List: 10 users

Fixed In Version: container-selinux-2.144.0-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-16 14:21:45 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
GitHub containers/container-selinux issue #100 (closed): Podman exec does not work in system cronjobs (last updated 2021-02-05 11:41:15 UTC)

Description Juan Badia Payno 2020-08-27 10:07:43 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Jindrich Novy 2020-08-27 10:11:57 UTC
Hi Juan,

please fill in the bug description properly, together with reproduction steps.

Thanks!
Jindrich

Comment 3 Daniel Walsh 2020-08-27 10:55:30 UTC
Have you tried this against the upstream Podman?

Comment 4 Juan Badia Payno 2020-08-27 11:18:24 UTC
(In reply to Daniel Walsh from comment #3)
> Have you tried this against the upstream Podman?

Nope, I haven't.

Comment 5 Tom Sweeney 2020-08-27 15:53:21 UTC
Juan, 
If you can, please try with upstream.  I know there were a number of changes to conmon that may account for this.  Also, can you let us know the output from `podman --version` please?

Comment 6 Juan Badia Payno 2020-08-28 10:26:58 UTC
(In reply to Tom Sweeney from comment #5)
> Juan, 
> If you can, please try with upstream.  

I need some guidance here... not sure how to proceed.

> I know there were a number of changes
> to conmon that may account for this.  Also, can you let us know the output
> from `podman --version` please?

$ podman --version
podman version 1.6.4

Comment 9 Juan Badia Payno 2020-08-31 10:17:24 UTC
Thanks for the downstream version.

But the new version does not fix the issue.

[stack@undercloud-0 tmp]$ podman --version 
podman version 2.0.5

A piece of the output:
        "Error: [conmon:d]: exec with attach is waiting for start message from parent",
        "[conmon:d]: exec with attach got start message from parent",
        "time=\"2020-08-31T06:03:06-04:00\" level=error msg=\"exec failed: container_linux.go:349: starting container process caused \\\"permission denied\\\"\"",
        "exec failed: container_linux.go:349: starting container process caused \"permission denied\": OCI runtime permission denied error"


I installed the following packages and restarted the server:
podman-2.0.5-1.el8.x86_64.rpm 
podman-catatonit-2.0.5-1.el8.x86_64.rpm
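
For reference, local RPMs like these can be installed with dnf; the ./ paths below are illustrative and assume the files sit in the current directory:

$ sudo dnf install ./podman-2.0.5-1.el8.x86_64.rpm ./podman-catatonit-2.0.5-1.el8.x86_64.rpm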

Comment 10 Tom Sweeney 2020-08-31 14:19:00 UTC
Well darn, I was hoping that was something that had been cleaned up, as I know there's been a fair amount of work done in the exec/run code recently. I'll take a dive later today.

Comment 11 Tom Sweeney 2020-08-31 19:20:48 UTC
Juan, a couple of questions for you.

Just verifying, the entry you're putting into crontab is for a user on the system with root privileges and not the root user, right?

Does this work if you add the crontab entry to the crontab for root?

Does cron_script.sh run on the command line (outside of cron) for the user?  Or for root?

Thanks.

Peter Hunt, does this conmon error look like a familiar one?
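
For reference, the kind of user crontab entry under discussion would look something like this; the schedule and script path are illustrative, not taken from this report:

*/5 * * * * /home/stack/cron_script.sh >> /tmp/cron_script.log 2>&1

(installed with `crontab -e` as the unprivileged user, versus `sudo crontab -e` for root's crontab)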

Comment 12 Juan Badia Payno 2020-09-01 14:25:14 UTC
(In reply to Tom Sweeney from comment #11)
> Juan, a couple of questions for you.
> 
> Just verifying, the entry you're putting into crontab is for a user on the
> system with root privileges and not the root user, right?
The user has sudo privileges; the cron task is an Ansible task.

> 
> Does this work if you add the crontab entry to the crontab for root?
It works in the crontab for the root user.

> 
> Does cron_script.sh run on the command line (outside of cron) for the user? 
It does work for the stack user outside of cron.

> Or for root?
It also works for the root user outside of cron.

> 
> Thanks.
> 
> Peter Hunt, does this conmon error look like a familiar one?

Comment 13 Peter Hunt 2020-09-01 16:31:02 UTC
The error isn't from conmon; it's coming from runc. conmon is just the messenger :)

If you run the podman command outside of ansible/cron, does it work?

Another thought: does the exec work if you specify a user with `--user`? I'm not sure what podman does otherwise, but it's possible it's not choosing the right user to do the exec.
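
A minimal sketch of that suggestion, reusing the mysql container named later in this report; the `id` command is only illustrative:

$ sudo podman exec --user root mysql id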

Comment 14 Juan Badia Payno 2020-09-01 17:29:18 UTC
(In reply to Peter Hunt from comment #13)
> The error isn't from conmon; it's coming from runc. conmon is just the
> messenger :)
Let me add the whole stderr for both cases: when it does not work and when it does.

DOES NOT WORK
    "stderr_lines": [
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Reading configuration file \\\"/usr/share/containers/libpod.conf\\\"\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Merged system config \\\"/usr/share/containers/libpod.conf\\\": &{{false false false false false true} 0 {   [] [] []}  docker://  runc map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] systemd   /var/run/libpod -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman []   k8s.gcr.io/pause:3.1 /pause false false  2048 shm    false}\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Using graph driver overlay\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Using run root /var/run/containers/storage\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Using tmp dir /var/run/libpod\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Set libpod namespace to \\\"\\\"\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"No store required. Not opening container store.\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Initializing event backend journald\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=warning msg=\"Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"using runtime \\\"/usr/bin/runc\\\"\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=warning msg=\"Error loading CNI config list file /etc/cni/net.d/87-podman-bridge.conflist: error parsing configuration list: unexpected end of JSON input\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Creating new exec session in container 448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2 with session id c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -s -c 448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2 -u c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8 -p /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8/exec_pid -l k8s-file:/var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8/exec_log --exit-dir /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8/exit --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -e --exec-attach --exec-process-spec /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8/exec-process-945493462]\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=info msg=\"Running conmon under slice machine.slice and unitName libpod-conmon-448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2.scope\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=warning msg=\"Failed to add conmon to systemd sandbox cgroup: Unit libpod-conmon-448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2.scope already exists.\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Attaching to container 448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2 exec session c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"connecting to socket /var/run/libpod/socket/c4d2254e77219455749e77e17d04fb219307b708d921ffbe8a862ce50bef81b8/attach\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Received: 0\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=debug msg=\"Received: -1\"",
        "time=\"2020-09-01T13:09:06-04:00\" level=error msg=\"[conmon:d]: exec with attach is waiting for start message from parent\\n[conmon:d]: exec with attach got start message from parent\\ntime=\\\"2020-09-01T13:09:06-04:00\\\" level=error msg=\\\"exec failed: container_linux.go:349: starting container process caused \\\\\\\"permission denied\\\\\\\"\\\"\\nexec failed: container_linux.go:349: starting container process caused \\\"permission denied\\\": OCI runtime permission denied error\""
    ],

DOES WORK

"stderr_lines": [
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Reading configuration file \\\"/usr/share/containers/libpod.conf\\\"\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Merged system config \\\"/usr/share/containers/libpod.conf\\\": &{{false false false false false true} 0 {   [] [] []}  docker://  runc map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] systemd   /var/run/libpod -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman []   k8s.gcr.io/pause:3.1 /pause false false  2048 shm    false}\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Using graph driver overlay\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Using run root /var/run/containers/storage\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Using tmp dir /var/run/libpod\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Set libpod namespace to \\\"\\\"\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"No store required. Not opening container store.\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Initializing event backend journald\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=warning msg=\"Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"using runtime \\\"/usr/bin/runc\\\"\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=warning msg=\"Error loading CNI config list file /etc/cni/net.d/87-podman-bridge.conflist: error parsing configuration list: unexpected end of JSON input\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Creating new exec session in container 448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2 with session id 1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -s -c 448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2 -u 1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb -p /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb/exec_pid -l k8s-file:/var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb/exec_log --exit-dir /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb/exit --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -e --exec-attach --exec-process-spec /var/lib/containers/storage/overlay-containers/448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2/userdata/1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb/exec-process-027857989]\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=info msg=\"Running conmon under slice machine.slice and unitName libpod-conmon-448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2.scope\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=warning msg=\"Failed to add conmon to systemd sandbox cgroup: Unit libpod-conmon-448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2.scope already exists.\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Attaching to container 448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2 exec session 1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"connecting to socket /var/run/libpod/socket/1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb/attach\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Received: 0\"",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Received: 304536\"",
        "[conmon:d]: exec with attach is waiting for start message from parent",
        "[conmon:d]: exec with attach got start message from parent",
        "time=\"2020-09-01T13:14:37-04:00\" level=debug msg=\"Successfully started exec session 1e834af851101202e85c48053ba9c5a508ebc6f971dc52f49aed9bf3668baadb in container 448110b04b2e9d2df82a64d10a27b082435e3556c201b08e2981958ee4b10ff2\""
    ]
> 
> If you run the podman command outside of ansible/cron, does it work? 
I think most of the possibilities were answered in comment #12.

Executing the podman command directly from the cron task does not work either. Same error for me.
time="2020-09-01T13:21:01-04:00" level=debug msg="connecting to socket /var/run/libpod/socket/dfd679fa16ba604b20c8eb36a4b3ee6e68a1564091faeda3d47b94ea94a61896/attach"
time="2020-09-01T13:21:01-04:00" level=debug msg="Received: 0"
time="2020-09-01T13:21:02-04:00" level=debug msg="Received: -1"
time="2020-09-01T13:21:02-04:00" level=error msg="[conmon:d]: exec with attach is waiting for start message from parent\n[conmon:d]: exec with attach got start message from parent\ntime=\"2020-09-01T13:21:02-04:00\" level=error msg=\"exec failed: container_linux.go:349: starting container process caused \\\"permission denied\\\"\"\nexec failed: container_linux.go:349: starting container process caused \"permission denied\": OCI runtime permission denied error"

> 
> Another thought: does the exec work if you specify a user with `--user`? I'm
> not sure what podman does otherwise, but it's possible it's not choosing the
> right user to do the exec.

Well, the point is that the container's owner is root.
[stack@undercloud-0 tmp]$ podman ps -a 
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES

[stack@undercloud-0 tmp]$ sudo podman ps -a | grep mysql 
51066b9815e5  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-mariadb:16.1_20200730.1                    bash -ec if [ -d ...  2 weeks ago  Exited (0) 2 weeks ago         mysql_neutron_db_rename
448110b04b2e  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-mariadb:16.1_20200730.1                    kolla_start           2 weeks ago  Up 2 weeks ago                 mysql
fc59b94b74bb  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-mariadb:16.1_20200730.1                    bash -ec if [ -e ...  2 weeks ago  Exited (0) 2 weeks ago         mysql_bootstrap
8847fc32b757  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-mariadb:16.1_20200730.1                    /bin/bash -c chow...  2 weeks ago  Exited (0) 2 weeks ago         mysql_init_logs

Comment 15 Tom Sweeney 2020-09-01 19:32:53 UTC
Juan thanks a bunch for all the answers and feedback.

Dan Walsh or Giuseppe, any thoughts on why this wouldn't run under cron for a user, but would run from the command line for the user?  I thought cron used the same userspace...

Comment 18 Daniel Walsh 2020-09-02 11:26:11 UTC
I need the AVCs to figure out what SELinux does not like. Most likely the cron job is not transitioning the user to unconfined_t, and is running as some cronjob type, which is not allowed to transition to container_t.
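
For reference, the AVCs being asked for can be pulled from the audit log with standard RHEL 8 tooling (no bug-specific paths assumed):

$ sudo ausearch -m AVC -ts recent
$ sudo sealert -a /var/log/audit/audit.log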

Comment 19 Juan Badia Payno 2020-09-02 14:08:37 UTC
(In reply to Daniel Walsh from comment #18)
> I need the AVCs to figure out what SELinux does not like. Most likely the
> cron job is not transitioning the user to unconfined_t, and is running
> as some cronjob type, which is not allowed to transition to container_t.

My apologies, I hadn't realized that the comment was private.
If something else is needed, I will need some assistance.

[stack@undercloud-0 ~]$ sudo journalctl -t setroubleshoot --since=07:37
-- Logs begin at Fri 2020-08-21 12:28:19 UTC, end at Wed 2020-08-26 07:39:25 UTC. --
Aug 26 07:38:07 undercloud-0.redhat.local setroubleshoot[403485]: SELinux is preventing / from using the transition access on a process. For complete SELinux messages run: sealert -l 89caeb53-5da6-4c7f-a6ae-df91b1424b65

[stack@undercloud-0 ~]$ sealert -l 89caeb53-5da6-4c7f-a6ae-df91b1424b65
SELinux is preventing / from using the transition access on a process.

*****  Plugin restorecon_source (99.5 confidence) suggests   *****************

If you want to fix the label. 
/ default label should be default_t.
Then you can run restorecon.
Do
# /sbin/restorecon -v /

*****  Plugin catchall (1.49 confidence) suggests   **************************

If you believe that  should be allowed transition access on processes labeled container_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'runc:[2:INIT]' --raw | audit2allow -M my-runc2INIT
# semodule -X 300 -i my-runc2INIT.pp


Additional Information:
Source Context                system_u:system_r:system_cronjob_t:s0
Target Context                system_u:system_r:container_t:s0:c152,c699
Target Objects                /usr/bin/bash [ process ]
Source                        runc:[2:INIT]
Source Path                   /
Port                          <Unknown>
Host                          undercloud-0.redhat.local
Source RPM Packages           filesystem-3.8-2.el8.x86_64
Target RPM Packages           bash-4.4.19-10.el8.x86_64
Policy RPM                    selinux-policy-3.14.3-20.el8.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     undercloud-0.redhat.local
Platform                      Linux undercloud-0.redhat.local
                              4.18.0-147.24.2.el8_1.x86_64 #1 SMP Tue Jul 21
                              14:11:32 UTC 2020 x86_64 x86_64
Alert Count                   353
First Seen                    2020-08-25 07:51:07 UTC
Last Seen                     2020-08-26 07:38:06 UTC
Local ID                      89caeb53-5da6-4c7f-a6ae-df91b1424b65

Raw Audit Messages
type=AVC msg=audit(1598427486.532:116383): avc:  denied  { transition } for  pid=403461 comm="runc:[2:INIT]" path="/usr/bin/bash" dev="overlay" ino=48785 scontext=system_u:system_r:system_cronjob_t:s0 tcontext=system_u:system_r:container_t:s0:c152,c699 tclass=process permissive=0


type=SYSCALL msg=audit(1598427486.532:116383): arch=x86_64 syscall=execve success=no exit=EACCES a0=c0001764d0 a1=c0000f3e60 a2=c0000e6300 a3=0 items=0 ppid=403450 pid=403461 auid=1001 uid=42434 gid=42434 euid=42434 suid=42434 fsuid=42434 egid=42434 sgid=42434 fsgid=42434 tty=(none) ses=245 comm=runc:[2:INIT] exe=/ subj=system_u:system_r:system_cronjob_t:s0 key=(null)

Hash: runc:[2:INIT],system_cronjob_t,container_t,process,transition


After the following commands, everything worked as expected.

[stack@undercloud-0 ~]$ sudo ausearch -c 'runc:[2:INIT]' --raw | audit2allow -M my-runc2INIT
[stack@undercloud-0 ~]$ sudo semodule -X 300 -i my-runc2INIT.pp

Now it works. Here is the generated policy module:

$ cat my-runc2INIT.te 

module my-runc2INIT 1.0;

require {
	type container_t;
	type system_cronjob_t;
	class process transition;
}

#============= system_cronjob_t ==============
allow system_cronjob_t container_t:process transition;
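
Note that this local module is a workaround rather than the fix; once a fixed container-selinux package is installed, it can be removed again with the same priority used at install time:

$ sudo semodule -X 300 -r my-runc2INIT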

Comment 20 Daniel Walsh 2020-09-04 11:50:37 UTC
Looks like there was a fix for this in container-selinux 2.140.0:


commit 965c7fb488ccec2c623d1b71e665f70c8ef3db11 (tag: v2.140.0)
Author: Daniel J Walsh <dwalsh>
Date:   Thu Jul 23 14:13:47 2020 -0400

    Allow cron jobs to run podman
    
    Signed-off-by: Daniel J Walsh <dwalsh>

Comment 21 Daniel Walsh 2020-09-15 20:26:32 UTC
Should be fixed in the RHEL 8.3 release of container-selinux.
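
A quick way to verify once the updated package lands; sesearch comes from the setools-console package, and the rule searched for matches the AVC denial above:

$ rpm -q container-selinux
$ sudo sesearch -A -s system_cronjob_t -t container_t -c process -p transition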

Comment 22 Tom Sweeney 2020-09-16 17:51:54 UTC
Setting to POST and assigning to Jindrich for any packaging needs.

Comment 31 errata-xmlrpc 2021-02-16 14:21:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0531

