Bug 2143562 - container build push jobs are failing with 'error while loading shared libraries: libc.so.6: cannot change memory protections'
Summary: container build push jobs are failing with 'error while loading shared libraries: libc.so.6: cannot change memory protections'
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-selinux
Version: 17.1 (Wallaby)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Julie Pichon
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-17 08:27 UTC by svyas
Modified: 2022-11-21 14:38 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-21 14:38:36 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker OSP-20252 (last updated 2022-11-17 08:35:07 UTC)

Comment 2 Julie Pichon 2022-11-17 13:26:21 UTC
A SELinux error was shared on IRC. I think the usual SELinux debugging advice of reproducing the error in Permissive mode applies, both to capture the complete audit logs and to confirm that the problem can indeed be fixed with SELinux. From the log file that was shared, the process that caused the error seems to match the command from the description:

    /bin/sh -c if [ -f "/etc/yum.repos.d/ubi.repo" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf 

It would also be good to see the labels (with ls -lZ) for /etc/yum.repos.d and /var/cache/dnf. If we have access to another machine where this still works, comparing the labels may be a good idea. But the fact that this is coming up as a libc error is concerning; I wonder if other container-related things may have changed as well.
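
For reference, capturing the full set of denials in Permissive mode could look like this (a sketch, assuming root access on the affected node):

$ sudo setenforce 0                  # Permissive: denials are logged but not enforced
  ... re-run the failing container build push job here ...
$ sudo ausearch -m AVC -ts recent    # collect all AVC denials from the audit log
$ sudo setenforce 1                  # back to Enforcing once done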

Comment 5 Julie Pichon 2022-11-17 17:51:34 UTC
The yum changes may also be of interest, in case they affected the labels, considering the commands that fail:

-dnf.noarch                                       4.10.0-5.el9_0                                @System                       
-dnf-data.noarch                                  4.10.0-5.el9_0                                @System                       
-dnf-plugins-core.noarch                          4.0.24-4.el9_0                                @System                       
-libdnf.x86_64                                    0.65.0-5.1.el9_0                              @rhosp-rhel-9.0-baseos        
-yum.noarch                                       4.10.0-5.el9_0                                @System                       
-yum-utils.noarch                                 4.0.24-4.el9_0                                @System                       

+dnf.noarch                                       4.12.0-4.el9                                  @osp-trunk-deps               
+dnf-data.noarch                                  4.12.0-4.el9                                  @osp-trunk-deps               
+dnf-plugins-core.noarch                          4.1.0-3.el9                                   @osp-trunk-deps
+libdnf.x86_64                                    0.67.0-3.el9                                  @osp-trunk-deps  
+yum.noarch                                       4.12.0-4.el9                                  @osp-trunk-deps               
+yum-utils.noarch                                 4.1.0-3.el9                                   @osp-trunk-deps       

It would be good to get the output of:

$ ls -lZ /etc/yum.repos.d
$ ls -lZ /var/cache/dnf 

For reference, the initial denial looks like this:

type=AVC msg=audit(1668694804.354:3347): avc:  denied  { read } for  pid=39472 comm="sh" path="/usr/lib64/libc.so.6" dev="vda4" ino=251679942 scontext=system_u:system_r:container_t:s0:c334,c724 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=1

However, neither of the directories mentioned in the failing commands seems to have a var_lib_t context, comparing with a RHEL 9 environment...
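
If the path from the AVC resolves on the host, comparing the actual label on the file with what the loaded policy expects would narrow this down (a sketch; matchpathcon is part of libselinux-utils):

$ ls -Z /usr/lib64/libc.so.6            # label currently on the file
$ matchpathcon /usr/lib64/libc.so.6     # label the policy expects for that path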

Looking at podman issues, "libc.so.6: cannot change memory protections" appears to come up regularly. One was recently reopened [1] and mentions a wrong context on local container storage ($HOME/.local/share/containers), among other things. Sometimes it is caused by container-selinux not installing properly. It may be worthwhile to reinstall the container-selinux package manually on the machine and make sure no error appears in the dnf output (scriptlet errors do not abort the transaction, so they are easy to miss). Then run "seinfo --type | grep container" to confirm that the correct container types are installed (see the sketch below).

[1] https://github.com/containers/podman/issues/10817
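
The check could go along these lines (a sketch; seinfo comes from the setools-console package):

$ sudo dnf reinstall container-selinux    # watch the scriptlet output for errors
$ seinfo --type | grep container          # container_t, container_file_t, etc. should be listed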

Comment 8 Julie Pichon 2022-11-18 10:08:43 UTC
We did some debugging together with Sandeep, and the issue is with the latest container-selinux package from osp-trunk-deps, which fails to install properly. This can be checked with "$ seinfo --type | grep container": if it only returns 3 types, then the rpm was not installed correctly. (There should be a dozen.)
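
A quick way to count them (the exact numbers depend on the container-selinux version, so treat them as rough):

$ seinfo --type | grep -c container

A result around 3 means only the base-policy types are present and the module failed to load; a correct install returns roughly a dozen.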

Trying to reinstall the rpm fails with the following error:

container-selinux.noarch                         3:2.189.0-1.el9
  Running scriptlet:                                                                                                                                                                              1/2
libsemanage.semanage_pipe_data: Child process /usr/libexec/selinux/hll/pp failed with code: 255. (No such file or directory).                                                                                                                
container: libsepol.policydb_read: policydb module version 21 does not match my version range 4-20
container: libsepol.sepol_module_package_read: invalid module in module package (at section 0)
container: Failed to read policy package
libsemanage.semanage_direct_commit: Failed to compile hll files into cil files.
 (No such file or directory).
/usr/sbin/semodule:  Failed!

Downgrading the package completed successfully, and we were able to build a container after that. It seems we'll want to exclude the newer version until the compatibility issues are sorted out (this is not uncommon when selinux-policy updates, and is likely happening because of the mixed content).

We probably want to exclude container-selinux-3:2.189.0-1.el9.noarch. Our test worked with container-selinux-3:2.188.0-1.el9_0.noarch.
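
A sketch of the workaround (the repo file name below is an assumption and will differ per deployment):

$ sudo dnf downgrade container-selinux-2.188.0-1.el9_0   # back to the known-good build

Then pin it by excluding the package in the repo that pulls in the newer build, e.g. in /etc/yum.repos.d/osp-trunk-deps.repo:

exclude=container-selinux*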

Comment 10 Alan Pevec 2022-11-18 17:41:49 UTC
> until the compatibility issues are sorted out (this is not uncommon when selinux-policy updates, and is likely happening because of the mixed content).

Should we move this bz to RHEL/container-selinux?
Moving to RHOS/openstack-selinux for now, to be closer to the expert area.

Comment 12 Julie Pichon 2022-11-21 09:21:47 UTC
(In reply to Alan Pevec from comment #10)
> > until the compatibility issues are sorted out (this is not uncommon when selinux-policy updates, and is likely happening because of the mixed content).
> 
> Should we move this bz to RHEL/container-selinux?

No. I think this is due to the mixed environment because of this message:

container: libsepol.policydb_read: policydb module version 21 does not match my version range 4-20

It seems like container-selinux was built against a more recent version of selinux-policy. We got the newer container-selinux (that works on 9.1) but not the newer SELinux libraries (still on 9.0) so it couldn't install. I think the fix will be to test again when the entire environment is on 9.1 and confirm the newer container-selinux works fine in that context. I'm not sure if we want to keep this bz open until then, or if it's fine to close it since there are no further action needed now that the repository bringing in newer content was removed.

Comment 13 Alan Pevec 2022-11-21 14:38:36 UTC
Thanks Julie, I'll close this bz; we're tracking the move to 9.1 elsewhere.

