Package versions:

  podman-4.0.2-1.fc36.x86_64
  container-selinux-2.181.0-1.fc36.noarch
  selinux-policy-36.5-1.fc36.noarch
  selinux-policy-targeted-36.5-1.fc36.noarch

Backstory: https://github.com/containers/podman/issues/13684

Command: podman run -it fedora:36 /bin/bash

There are two SELinux warnings.

First warning:

SELinux is preventing bash from 'read, write' accesses on the chr_file /dev/pts/0.

*****  Plugin catchall (100. confidence) suggests  **************************

If you believe that bash should be allowed read write access on the 0 chr_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'bash' --raw | audit2allow -M my-bash
# semodule -X 300 -i my-bash.pp

Additional Information:
Source Context                system_u:system_r:container_t:s0:c192,c578
Target Context                system_u:object_r:container_file_t:s0:c192,c578
Target Objects                /dev/pts/0 [ chr_file ]
Source                        bash
Source Path                   bash
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages
Target RPM Packages
SELinux Policy RPM            selinux-policy-targeted-36.5-1.fc36.noarch
Local Policy RPM              selinux-policy-targeted-36.5-1.fc36.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     localhost.localdomain
Platform                      Linux localhost.localdomain 5.17.0-300.fc36.x86_64
                              #1 SMP PREEMPT Wed Mar 23 22:00:40 UTC 2022 x86_64 x86_64
Alert Count                   4
First Seen                    2022-03-30 22:57:41 +03
Last Seen                     2022-03-30 22:57:41 +03
Local ID                      40185047-d059-4fae-8508-35dcb49ffd95

Raw Audit Messages
type=AVC msg=audit(1648670261.413:1577): avc: denied { read write } for pid=329503 comm="bash" path="/dev/pts/0" dev="devpts" ino=3 scontext=system_u:system_r:container_t:s0:c192,c578 tcontext=system_u:object_r:container_file_t:s0:c192,c578 tclass=chr_file permissive=0

Hash: bash,container_t,container_file_t,chr_file,read,write

****************************************************************************

Second warning:

SELinux is preventing bash from read access on the file /usr/lib64/libc.so.6.

*****  Plugin restorecon (99.5 confidence) suggests  ************************

If you want to fix the label.
/usr/lib64/libc.so.6 default label should be lib_t.
Then you can run restorecon. The access attempt may have been stopped due to
insufficient permissions to access a parent directory in which case try to
change the following command accordingly.
Do
# /sbin/restorecon -v /usr/lib64/libc.so.6

*****  Plugin catchall (1.49 confidence) suggests  **************************

If you believe that bash should be allowed read access on the libc.so.6 file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'bash' --raw | audit2allow -M my-bash
# semodule -X 300 -i my-bash.pp

Additional Information:
Source Context                system_u:system_r:container_t:s0:c192,c578
Target Context                unconfined_u:object_r:data_home_t:s0
Target Objects                /usr/lib64/libc.so.6 [ file ]
Source                        bash
Source Path                   bash
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages
Target RPM Packages           glibc-2.35-4.fc36.x86_64
SELinux Policy RPM            selinux-policy-targeted-36.5-1.fc36.noarch
Local Policy RPM              selinux-policy-targeted-36.5-1.fc36.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     localhost.localdomain
Platform                      Linux localhost.localdomain 5.17.0-300.fc36.x86_64
                              #1 SMP PREEMPT Wed Mar 23 22:00:40 UTC 2022 x86_64 x86_64
Alert Count                   1
First Seen                    2022-03-30 22:57:41 +03
Last Seen                     2022-03-30 22:57:41 +03
Local ID                      c1e052a2-339a-4451-8479-8e68c8a07c64

Raw Audit Messages
type=AVC msg=audit(1648670261.419:1578): avc: denied { read } for pid=329503 comm="bash" path="/usr/lib64/libc.so.6" dev="nvme0n1p3" ino=33930544 scontext=system_u:system_r:container_t:s0:c192,c578 tcontext=unconfined_u:object_r:data_home_t:s0 tclass=file permissive=0

Hash: bash,container_t,data_home_t,file,read
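[Editor's note] The interesting parts of the second alert are the three context fields in the raw AVC line. A small convenience sketch (my own parsing, not an official SELinux tool) that pulls them out of the message quoted above:

```shell
# Extract "field=value" pairs from the raw AVC message of the second denial.
avc='type=AVC msg=audit(1648670261.419:1578): avc: denied { read } for pid=329503 comm="bash" path="/usr/lib64/libc.so.6" dev="nvme0n1p3" ino=33930544 scontext=system_u:system_r:container_t:s0:c192,c578 tcontext=unconfined_u:object_r:data_home_t:s0 tclass=file permissive=0'

for field in scontext tcontext tclass; do
    # each field appears as "name=value" with no embedded spaces
    value=$(printf '%s\n' "$avc" | grep -o "${field}=[^ ]*" | cut -d= -f2)
    printf '%s = %s\n' "$field" "$value"
done
# prints:
#   scontext = system_u:system_r:container_t:s0:c192,c578
#   tcontext = unconfined_u:object_r:data_home_t:s0
#   tclass = file
# The target type is data_home_t (the label used for files under
# ~/.local/share), while /usr/lib64/libc.so.6 should carry lib_t --
# in other words, the library file is mislabeled.
```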
****************************************************************************

What I tried to fix:

The first commands I tried:

$ restorecon -R -v $HOME/.local/share
$ sudo dnf -y reinstall container-selinux
$ restorecon -R -v $HOME/.local/share

I also did "podman system reset" to reset everything. I even deleted .local/share/containers completely and tried a clean start, but that doesn't work either. I always get:

registry.fedoraproject.org/fedora:36  /bin/bash  31 hours ago  Exited (127) ....

It works when I do "setenforce 0". I don't know what else could be broken, buggy, or a potential issue. Thank you.
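[Editor's note] The restorecon plugin in the second alert points at a plain mislabel, so the usual first step is to compare the file's current label against what the loaded policy expects, then relabel just that file. A minimal sketch using the standard SELinux tools (matchpathcon ships in libselinux-utils on Fedora; restorecon needs root); it is the conventional first step, though in this report the policy store itself also turned out to be broken:

```shell
ls -Z /usr/lib64/libc.so.6               # current label (data_home_t in this report)
matchpathcon /usr/lib64/libc.so.6        # label the policy expects (lib_t)
sudo restorecon -v /usr/lib64/libc.so.6  # reset the file to the policy default
```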
I also noticed some errors when I tried to reinstall the package:

dnf -y reinstall container-selinux
Last metadata expiration check: 0:42:23 ago on Wed 30 Mar 2022 10:28:13 PM +03.
Dependencies resolved.
====================================================================================================================
 Package               Architecture   Version             Repository        Size
====================================================================================================================
Reinstalling:
 container-selinux     noarch         2:2.181.0-1.fc36    updates-testing   49 k

Transaction Summary
====================================================================================================================

Total download size: 49 k
Installed size: 54 k
Downloading Packages:
container-selinux-2.181.0-1.fc36.noarch.rpm                                         79 kB/s |  49 kB     00:00
--------------------------------------------------------------------------------------------------------------------
Total                                                                               35 kB/s |  49 kB     00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                          1/1
  Running scriptlet: container-selinux-2:2.181.0-1.fc36.noarch                1/2
  Reinstalling     : container-selinux-2:2.181.0-1.fc36.noarch                1/2
  Running scriptlet: container-selinux-2:2.181.0-1.fc36.noarch                1/2
Failed to resolve allow statement at /var/lib/selinux/targeted/tmp/modules/200/osbuild/cil:127
Failed to resolve AST
/usr/sbin/semodule: Failed!
Failed to resolve allow statement at /var/lib/selinux/targeted/tmp/modules/200/container/cil:1263
Failed to resolve AST
semodule: Failed!
  Running scriptlet: container-selinux-2:2.181.0-1.fc36.noarch                2/2
Failed to resolve allow statement at /var/lib/selinux/targeted/tmp/modules/200/container/cil:1263
Failed to resolve AST
semodule: Failed!
  Cleanup          : container-selinux-2:2.181.0-1.fc36.noarch                2/2
  Running scriptlet: container-selinux-2:2.181.0-1.fc36.noarch                2/2
  Verifying        : container-selinux-2:2.181.0-1.fc36.noarch                1/2
  Verifying        : container-selinux-2:2.181.0-1.fc36.noarch                2/2

Reinstalled:
  container-selinux-2:2.181.0-1.fc36.noarch

Complete!
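[Editor's note] The semodule failure in the scriptlet can usually be reproduced outside of dnf by rebuilding the policy by hand, which narrows the problem to the module store rather than to the package itself. A sketch (root required; both options are documented in semodule(8)):

```shell
sudo semodule -B   # rebuild and reload policy from the installed modules;
                   # reproduces "Failed to resolve AST" if the store is broken
sudo semodule --list-modules=full | grep -E 'container|osbuild'
                   # show which priorities provide the failing modules
```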
This is installing fine for me. Are you sure you don't have some custom package which conflicts with the container.fc descriptions?
AFAIK no, I don't have any custom packages related to that. I have never even installed docker.
I also checked the path from "Failed to resolve allow statement at /var/lib/selinux/targeted/tmp/modules/200/container/cil:1263". Under the /var/lib/selinux/targeted/tmp directory, /modules/200/container doesn't exist.
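[Editor's note] That the tmp path no longer exists is expected: the .../tmp/modules tree is a scratch copy that libsemanage builds during a policy transaction and discards afterwards, while the persistent copy lives under active/modules. A sketch for inspecting what is actually installed at priority 200 (the priority container-selinux installs its module at), assuming Fedora's default store layout:

```shell
# Persistent module store -- the tmp/ tree only exists mid-transaction
sudo ls /var/lib/selinux/targeted/active/modules/200/ 2>/dev/null \
    || echo "nothing installed at priority 200"
```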
Lucas, Zdenek, any idea what the conflict is?
After a long wait and much trial and error: I removed all of the SELinux configs, including /var/lib/selinux/targeted/ and /var/lib/selinux/targeted/tmp/modules. After that I reinstalled all SELinux packages and ran "touch /.autorelabel && reboot"; now podman works fine. Looks like it works fine without the /var/lib/selinux/targeted/tmp/modules/200/container directory. I don't know how that directory got there.
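[Editor's note] The recovery described above, written out as commands. This is a reconstruction from the comment, not a verified procedure; note that wiping /var/lib/selinux/targeted also removes any locally installed policy modules, and that the conventional relabel trigger is the file /.autorelabel at the filesystem root:

```shell
sudo rm -rf /var/lib/selinux/targeted   # drop the corrupted policy store
sudo dnf -y reinstall selinux-policy selinux-policy-targeted container-selinux
sudo touch /.autorelabel                # schedule a full filesystem relabel
sudo reboot                             # the relabel runs during boot
```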
I see two independent issues. One: the library has an incorrect label; refer to the output of the restorecon setroubleshoot plugin. The other is probably related to the updated selinux-policy, which also requires updates in the osbuild and container modules; these problems are still being investigated.
*** This bug has been marked as a duplicate of bug 2070764 ***