Bug 1478966
Summary: | foreman-selinux is conflicting with container-selinux | ||
---|---|---|---|
Product: | Red Hat Satellite | Reporter: | Stanislav Tkachenko <stkachen> |
Component: | SELinux | Assignee: | Daniel Lobato Garcia <dlobatog> |
Status: | CLOSED ERRATA | QA Contact: | Lukas Pramuk <lpramuk> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 6.3.0 | CC: | achadha, adahms, bbuckingham, bkearney, dlobatog, ehelms, jcallaha, jhunsaker, katello-qa-list, lpramuk, lzap, rplevka, tdaianov |
Target Milestone: | Unspecified | Keywords: | AutomationBlocker, PrioBumpQA, Triaged |
Target Release: | Unused | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
URL: | http://projects.theforeman.org/issues/18284 | ||
Whiteboard: | |||
Fixed In Version: | foreman-selinux-1.15.6.2, foreman-selinux-1.15.6.2-1 | Doc Type: | Known Issue
Doc Text: |
There is a conflict between the container-selinux and foreman-selinux SELinux modules, caused by the redeclaration of docker_port_t. This prevents containers from being started, both from Satellite and manually.
To work around this issue, set foreman_t to permissive mode by running: `semanage permissive -a foreman_t`
If this occurs during the 6.3 Beta and blocks your completion or evaluation of the Beta, please contact Satellite Engineering via the Beta email list for assistance.
|
Story Points: | --- |
Clone Of: | 1414821 | Environment: | |
Last Closed: | 2018-02-21 16:54:17 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1414821 | ||
Bug Blocks: | 1186913 |
Comment 2
Satellite Program
2017-08-07 16:15:40 UTC
Upstream bug assigned to dlobatog

*** Bug 1478142 has been marked as a duplicate of this bug. ***

Moving this bug to POST for triage into Satellite 6, since the upstream issue http://projects.theforeman.org/issues/18284 has been resolved.

FAILEDQA since sat6.3.0-20. I think this should be FailedQA, as this fix is probably causing Foreman to be unable to perform "Test Connection" while adding a Docker compute resource:

    # audit.log:
    type=AVC msg=audit(1509529091.800:4278): avc: denied { name_connect } for pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
    type=SYSCALL msg=audit(1509529091.800:4278): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f3908845570 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
    type=AVC msg=audit(1509529091.801:4279): avc: denied { name_connect } for pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
    type=SYSCALL msg=audit(1509529091.801:4279): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f391788f620 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
    type=AVC msg=audit(1509529091.802:4280): avc: denied { name_connect } for pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
    type=SYSCALL msg=audit(1509529091.802:4280): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f391788d640 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
    type=AVC msg=audit(1509529091.802:4281): avc: denied { name_connect } for pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
    type=SYSCALL msg=audit(1509529091.802:4281): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f390ef3f528 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
    type=AVC msg=audit(1509529091.812:4282): avc: denied { name_connect } for pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
    type=SYSCALL msg=audit(1509529091.812:4282): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f3909304100 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)

Foreman shows the "permission denied" error notification and overall is not able to use the compute resource. I don't actually have direct proof that this regression was caused by this fix; however, it seems to be pretty much the only BZ related to SELinux and Docker that appeared in the failed build.

Roman,

you don't have the container-selinux package installed; install it and reload the Foreman policy, then it will work.

We can't do anything about it: the docker port is owned by that package, and we cannot add an allow rule without this dependency.
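As an aside, audit records like the ones above follow a flat `key=value` layout, so the interesting fields (target port, source and target contexts) can be pulled out with standard tools. A minimal sketch using only POSIX shell and grep; the sample line is copied from the log above, and the helper name `avc_field` is made up for illustration:

```shell
# One AVC record, copied verbatim from the audit log above.
avc='type=AVC msg=audit(1509529091.800:4278): avc: denied { name_connect } for pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket'

# avc_field LINE KEY -> the value of "KEY=value" in LINE
avc_field() {
  printf '%s\n' "$1" | grep -o "$2=[^ ]*" | cut -d= -f2
}

avc_field "$avc" dest      # 2375 (the Docker daemon port)
avc_field "$avc" scontext  # system_u:system_r:passenger_t:s0
avc_field "$avc" tcontext  # system_u:object_r:unreserved_port_t:s0
```

In practice the same records come straight from `ausearch -m avc`; parsing a captured line just keeps the example self-contained.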
We cannot make it a hard RPM dependency, because container-selinux is in the RHEL Extras repository; that would mean changing the requirements. Let me start a new thread about how the installer team wants to approach this.

If you can, please confirm it works with the package installed.

(In reply to Lukas Zapletal from comment #7)
> you don't have container-selinux package installed, install it and reload
> Foreman policy, then it will work.

Thanks for the suggestions. The package actually seems to be installed already:

    # yum list container-selinux
    Installed Packages
    container-selinux.noarch    2:2.28-1.git85ce147.el7    @rhel-7-server-extras-rpms

I am not sure how to reload the policy, but I tried the following, with this result:

    # semodule -r foreman
    libsemanage.semanage_direct_remove_key: Removing last foreman module (no other foreman module exists at another priority).
    Failed to resolve typeattributeset statement at /etc/selinux/targeted/tmp/modules/400/katello/cil:4

I worked with the SELinux team (lvrabec) on this, and a patch was created upstream: https://github.com/theforeman/foreman-selinux/pull/72

QA GA workaround, put foreman_t into permissive mode:

    # semanage permissive -a foreman_t

foreman-selinux-1.15.5-1.el7sat.noarch

This has already been fixed and shipped.

VERIFIED. @Satellite 6.3.0 Snap27
foreman-selinux-1.15.5-1.el7sat.noarch

by the manual reproducer in comment#0:

1. Install container-selinux

    # yum install container-selinux
    # semodule -l | grep -e container -e docker
    container    2.33.0

2. Due to open BZ #1527052, manually assign the tcp port to the container_port_t label

    # semanage port -a -t container_port_t -p tcp 2375
    # semanage port -l | grep -e container -e docker
    container_port_t               tcp      2375

3. Install foreman-selinux

    # yum install foreman-selinux
    ...
    Running transaction
      Installing : foreman-selinux-1.15.5-1.el7sat.noarch    1/1
      Verifying  : foreman-selinux-1.15.5-1.el7sat.noarch    1/1
    Installed:
      foreman-selinux.noarch 0:1.15.5-1.el7sat
    Complete!

>>> no warning/error message during rpm installation

    # semanage fcontext -l | grep foreman
    /etc/foreman(/.*)?                              all files     system_u:object_r:etc_t:s0
    /etc/puppet/node.rb                             regular file  system_u:object_r:foreman_enc_t:s0
    /var/lib/foreman(/.*)?                          all files     system_u:object_r:foreman_lib_t:s0
    /var/log/foreman(/.*)?                          all files     system_u:object_r:foreman_log_t:s0
    /var/run/foreman(/.*)?                          all files     system_u:object_r:foreman_var_run_t:s0
    /usr/share/foreman/.ssh(/.*)?                   all files     system_u:object_r:ssh_home_t:s0
    /var/lib/foreman/db/(.*.sqlite3)?               all files     system_u:object_r:foreman_db_t:s0
    /etc/puppetlabs/puppet/node.rb                  regular file  system_u:object_r:foreman_enc_t:s0
    /usr/share/foreman/config/hooks(/.*)?           all files     system_u:object_r:bin_t:s0
    /usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks                            regular file  system_u:object_r:foreman_tasks_exec_t:s0
    /opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks    regular file  system_u:object_r:foreman_tasks_exec_t:s0
    /usr/share/foreman/extras/noVNC/websockify\.py  all files     system_u:object_r:websockify_exec_t:s0

>>> foreman selinux labels are present => module loaded correctly, no conflict occurred

FailedQA. @Satellite 6.3.0 Snap30
foreman-selinux-1.15.5-1.el7sat.noarch

Though there is no conflict between the modules, I have to fail this BZ again. The original cause of the conflict was the dubious definition of docker_port_t. When I try to use Docker via tcp, the Passenger connection to docker_port_t is still denied!
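Whether tcp/2375 carries the expected SELinux type can be read off a `semanage port -l` line. A minimal offline sketch; the sample line is hypothetical, shaped like the `container_port_t tcp 2375` output in the reproducer above, and the helper name `port_type` is made up for illustration:

```shell
# Sample line, shaped like `semanage port -l | grep 2375` output in the
# reproducer above (a hypothetical capture, not a live query).
line='container_port_t               tcp      2375'

# port_type LINE -> the SELinux port type (first whitespace-separated field)
port_type() {
  printf '%s\n' "$1" | awk '{print $1}'
}

port_type "$line"  # container_port_t
```

If the first field were still unreserved_port_t, the name_connect denial above would be the expected outcome; on a live system the same check runs against `semanage port -l` directly (root required).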
(see BZ #1531075)

*** Bug 1531075 has been marked as a duplicate of this bug. ***

VERIFIED. @Satellite 6.3.0 Snap32
foreman-selinux-1.15.6.2-1.el7sat.noarch

by the manual reproducer in comment#0:

1. Install container-selinux

    # yum install container-selinux
    # semodule -l | grep container
    container    2.33.0

2. Install foreman-selinux

    # yum install foreman-selinux
    ...
    Running transaction
      Installing : foreman-selinux-1.15.6.2-1.el7sat.noarch    1/1
      Verifying  : foreman-selinux-1.15.6.2-1.el7sat.noarch    1/1
    Installed:
      foreman-selinux.noarch 0:1.15.6.2-1.el7sat
    Complete!

>>> no warning/error message during rpm installation

3. Check that the foreman module is loaded correctly

    # semanage fcontext -l | grep foreman
    /etc/foreman(/.*)?                              all files     system_u:object_r:etc_t:s0
    /etc/puppet/node.rb                             regular file  system_u:object_r:foreman_enc_t:s0
    /var/lib/foreman(/.*)?                          all files     system_u:object_r:foreman_lib_t:s0
    /var/log/foreman(/.*)?                          all files     system_u:object_r:foreman_log_t:s0
    /var/run/foreman(/.*)?                          all files     system_u:object_r:foreman_var_run_t:s0
    /usr/share/foreman/.ssh(/.*)?                   all files     system_u:object_r:ssh_home_t:s0
    /var/lib/foreman/db/(.*.sqlite3)?               all files     system_u:object_r:foreman_db_t:s0
    /etc/puppetlabs/puppet/node.rb                  regular file  system_u:object_r:foreman_enc_t:s0
    /usr/share/foreman/config/hooks(/.*)?           all files     system_u:object_r:bin_t:s0
    /usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks                            regular file  system_u:object_r:foreman_tasks_exec_t:s0
    /opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks    regular file  system_u:object_r:foreman_tasks_exec_t:s0
    /usr/share/foreman/extras/noVNC/websockify\.py  all files     system_u:object_r:websockify_exec_t:s0

>>> foreman selinux labels are present => module loaded correctly, no conflict occurred

4. Install Satellite and Docker, and check that Passenger can connect to the Docker port 2375/tcp

@URI /compute_resources/new
- fill in URL http://localhost:2375
- hit button [Test Connection]

>>> Success: Test connection was successful

Setting the 'requires_doc_text' flag to '-'.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0336