Bug 1478966 - foreman-selinux is conflicting with container-selinux
Status: CLOSED ERRATA
Product: Red Hat Satellite 6
Classification: Red Hat
Component: SELinux
Version: 6.3.0
Hardware: Unspecified  OS: Unspecified
Priority: medium  Severity: medium
Target Milestone: GA
Target Release: --
Assigned To: Daniel Lobato Garcia
QA Contact: Lukas Pramuk
URL: http://projects.theforeman.org/issues...
Keywords: AutomationBlocker, PrioBumpQA, Triaged
Duplicates: 1478142, 1531075
Depends On: 1414821
Blocks: 1186913
 
Reported: 2017-08-07 11:15 EDT by Stanislav Tkachenko
Modified: 2018-02-21 11:54 EST
CC List: 13 users

See Also:
Fixed In Version: foreman-selinux-1.15.6.2, foreman-selinux-1.15.6.2-1
Doc Type: Known Issue
Doc Text:
There is a conflict between the SELinux modules container-selinux and foreman-selinux, caused by the redeclaration of docker_port_t. This prevents containers from being started, both from Satellite and manually. To work around this issue, set foreman_t to permissive by running: `semanage permissive -a foreman_t`. If this occurs during the 6.3 Beta and blocks your completion or evaluation of the Beta, please contact Satellite Engineering via the beta email list for assistance.
Story Points: ---
Clone Of: 1414821
Environment:
Last Closed: 2018-02-21 11:54:17 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Foreman Issue Tracker 18284 None None None 2017-08-07 11:15 EDT
Red Hat Knowledge Base (Solution) 3198142 None None None 2017-09-27 13:49 EDT

Comment 2 pm-sat@redhat.com 2017-08-07 12:15:40 EDT
Upstream bug assigned to dlobatog@redhat.com
Comment 3 pm-sat@redhat.com 2017-08-07 12:15:44 EDT
Upstream bug assigned to dlobatog@redhat.com
Comment 4 Lukas Zapletal 2017-09-01 06:17:45 EDT
*** Bug 1478142 has been marked as a duplicate of this bug. ***
Comment 5 pm-sat@redhat.com 2017-09-08 08:16:30 EDT
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/18284 has been resolved.
Comment 6 Roman Plevka 2017-11-01 05:45:26 EDT
FAILEDQA
since sat6.3.0-20
I think this should fail QA, as this fix appears to prevent foreman from performing "test connection" when adding a docker compute resource:

#audit.log:
type=AVC msg=audit(1509529091.800:4278): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.800:4278): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f3908845570 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.801:4279): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.801:4279): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f391788f620 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.802:4280): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.802:4280): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f391788d640 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.802:4281): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.802:4281): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f390ef3f528 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.812:4282): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.812:4282): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f3909304100 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)

foreman shows the "permission denied" error notification and overall is not able to use the compute resource.

I don't have direct proof that this regression was caused by this fix; however, it seems to be the only BZ related to selinux and docker that appeared in the failed build.
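
One check that could narrow this down (a suggestion, not something captured in the logs above): the AVC records show tcontext unreserved_port_t for dest=2375, which suggests port 2375/tcp carries no container/docker label on this host. That can be confirmed with:

# semanage port -l | grep -w 2375

No output would mean the port is still unlabeled, i.e. the denial comes from the missing port label rather than from the foreman policy itself.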
Comment 7 Lukas Zapletal 2017-11-02 07:52:33 EDT
Roman,

you don't have the container-selinux package installed. Install it and reload the Foreman policy, then it will work.

We can't do anything about it: the docker port type is owned by that package, and we cannot add an allow rule without this dependency.

We cannot make it a hard RPM dependency because it is in the RHEL Extras repository; that would mean changing our requirements. Let me start a new thread about how the installer team wants to approach this.

If you can, please confirm it works with the package installed.
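
For reference, a quick way to confirm this on a given host (illustrative commands, the same checks used later in the verification steps):

# semodule -l | grep container
# semanage port -l | grep -e container -e docker

If the container module or its port types are missing, the foreman policy has nothing to attach its allow rule to.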
Comment 8 Roman Plevka 2017-11-02 09:32:02 EDT
(In reply to Lukas Zapletal from comment #7)
> Roman,
> 
> you don't have the container-selinux package installed. Install it and
> reload the Foreman policy, then it will work.
> 
> We can't do anything about it: the docker port type is owned by that
> package, and we cannot add an allow rule without this dependency.
> 
> We cannot make it a hard RPM dependency because it is in the RHEL Extras
> repository; that would mean changing our requirements. Let me start a new
> thread about how the installer team wants to approach this.
> 
> If you can, please confirm it works with the package installed.

Thanks for the suggestions.
The package actually seems to be installed already:

# yum list container-selinux
Installed Packages
container-selinux.noarch           2:2.28-1.git85ce147.el7           @rhel-7-server-extras-rpms

I am not sure how to reload the policy, but I tried the following, with this result:

# semodule -r foreman
libsemanage.semanage_direct_remove_key: Removing last foreman module (no other foreman module exists at another priority).
Failed to resolve typeattributeset statement at /etc/selinux/targeted/tmp/modules/400/katello/cil:4
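
(For the record, and only as a suggestion: rather than removing the module by hand, reinstalling the package should reload the policy via the RPM scriptlets, e.g.

# yum reinstall foreman-selinux
# semodule -l | grep foreman

and the second command should list the foreman module again once it is loaded.)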
Comment 9 Lukas Zapletal 2017-11-06 09:13:44 EST
I worked with the SELinux team (lvrabec) on this and a patch was created upstream:

https://github.com/theforeman/foreman-selinux/pull/72
Comment 10 Lukas Zapletal 2017-11-06 09:14:16 EST
QA GA: As a workaround, put foreman_t into permissive mode:

semanage permissive -a foreman_t
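
Once a fixed foreman-selinux build is installed, the workaround can be reverted with:

semanage permissive -d foreman_t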
Comment 11 Lukas Zapletal 2017-12-05 07:44:00 EST
foreman-selinux-1.15.5-1.el7sat.noarch

This has already been fixed and shipped.
Comment 12 Lukas Pramuk 2017-12-19 10:51:51 EST
VERIFIED.

@Satellite 6.3.0 Snap27
foreman-selinux-1.15.5-1.el7sat.noarch

by the manual reproducer in comment #0:

1. Install container-selinux

# yum install container-selinux

# semodule -l | grep -e container -e docker
container	2.33.0

2. Due to open BZ #1527052, manually assign the TCP port to the container_port_t label

# semanage port -a -t container_port_t -p tcp 2375

# semanage port -l |grep -e container -e docker
container_port_t               tcp      2375

3. Install foreman-selinux

# yum install foreman-selinux
...
Running transaction
  Installing : foreman-selinux-1.15.5-1.el7sat.noarch                                                                            1/1 
  Verifying  : foreman-selinux-1.15.5-1.el7sat.noarch                                                                            1/1 

Installed:
  foreman-selinux.noarch 0:1.15.5-1.el7sat                                                                                           

Complete!

>>> no warning/error message during rpm installation

# semanage fcontext -l | grep foreman
/etc/foreman(/.*)?                                 all files          system_u:object_r:etc_t:s0 
/etc/puppet/node.rb                                regular file       system_u:object_r:foreman_enc_t:s0 
/var/lib/foreman(/.*)?                             all files          system_u:object_r:foreman_lib_t:s0 
/var/log/foreman(/.*)?                             all files          system_u:object_r:foreman_log_t:s0 
/var/run/foreman(/.*)?                             all files          system_u:object_r:foreman_var_run_t:s0 
/usr/share/foreman/.ssh(/.*)?                      all files          system_u:object_r:ssh_home_t:s0 
/var/lib/foreman/db/(.*.sqlite3)?                  all files          system_u:object_r:foreman_db_t:s0 
/etc/puppetlabs/puppet/node.rb                     regular file       system_u:object_r:foreman_enc_t:s0 
/usr/share/foreman/config/hooks(/.*)?              all files          system_u:object_r:bin_t:s0 
/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/usr/share/foreman/extras/noVNC/websockify\.py     all files          system_u:object_r:websockify_exec_t:s0 

>>> foreman selinux labels are present => module loaded correctly, no conflict occurred
Comment 13 Lukas Pramuk 2018-01-08 06:03:09 EST
FailedQA.

@Satellite 6.3.0 Snap30
foreman-selinux-1.15.5-1.el7sat.noarch

Though there is no longer a conflict between the modules, I have to fail this BZ again. The original cause of the conflict was the dubious definition of docker_port_t.
When I try to use Docker via TCP, the passenger connection to docker_port_t is still denied! (see BZ #1531075)
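
The denial can be confirmed on the affected host with, for example:

# ausearch -m AVC -ts recent | grep name_connect

which should show passenger_t being denied name_connect, this time against docker_port_t.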
Comment 14 Lukas Pramuk 2018-01-08 06:04:25 EST
*** Bug 1531075 has been marked as a duplicate of this bug. ***
Comment 22 Lukas Pramuk 2018-01-16 05:46:31 EST
VERIFIED.

@Satellite 6.3.0 Snap32
foreman-selinux-1.15.6.2-1.el7sat.noarch

by the manual reproducer in comment #0:

1. Install container-selinux
# yum install container-selinux
# semodule -l | grep container
container	2.33.0

2. Install foreman-selinux
# yum install foreman-selinux
...
Running transaction
  Installing : foreman-selinux-1.15.6.2-1.el7sat.noarch                                                                            1/1 
  Verifying  : foreman-selinux-1.15.6.2-1.el7sat.noarch                                                                            1/1 

Installed:
  foreman-selinux.noarch 0:1.15.6.2-1.el7sat                                                                                           

Complete!

>>> no warning/error message during rpm installation

3. Check that the foreman module is loaded correctly
# semanage fcontext -l | grep foreman
/etc/foreman(/.*)?                                 all files          system_u:object_r:etc_t:s0 
/etc/puppet/node.rb                                regular file       system_u:object_r:foreman_enc_t:s0 
/var/lib/foreman(/.*)?                             all files          system_u:object_r:foreman_lib_t:s0 
/var/log/foreman(/.*)?                             all files          system_u:object_r:foreman_log_t:s0 
/var/run/foreman(/.*)?                             all files          system_u:object_r:foreman_var_run_t:s0 
/usr/share/foreman/.ssh(/.*)?                      all files          system_u:object_r:ssh_home_t:s0 
/var/lib/foreman/db/(.*.sqlite3)?                  all files          system_u:object_r:foreman_db_t:s0 
/etc/puppetlabs/puppet/node.rb                     regular file       system_u:object_r:foreman_enc_t:s0 
/usr/share/foreman/config/hooks(/.*)?              all files          system_u:object_r:bin_t:s0 
/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/usr/share/foreman/extras/noVNC/websockify\.py     all files          system_u:object_r:websockify_exec_t:s0 

>>> foreman selinux labels are present => module loaded correctly, no conflict occurred

4. Install Satellite and Docker and check that passenger can connect to the Docker port 2375/tcp
@URI /compute_resources/new 
- fill in the URL http://localhost:2375
- hit the [Test Connection] button

>>> Success: Test connection was successful
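
As an additional check (illustrative), no new AVC denials should be logged for the connection attempt:

# ausearch -m AVC -ts recent | grep name_connect

and this should return no passenger_t denials against the container/docker port.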
Comment 23 Andrew Dahms 2018-02-15 22:29:06 EST
Setting the 'requires_doc_text' flag to '-'.
Comment 24 pm-sat@redhat.com 2018-02-21 11:54:17 EST
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0336
