Bug 1478966 - foreman-selinux is conflicting with container-selinux
Summary: foreman-selinux is conflicting with container-selinux
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: SELinux
Version: 6.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: Unspecified
Assignee: Daniel Lobato Garcia
QA Contact: Lukas Pramuk
URL: http://projects.theforeman.org/issues...
Whiteboard:
Duplicates: 1478142 1531075
Depends On: 1414821
Blocks: 1186913
 
Reported: 2017-08-07 15:15 UTC by Stanislav Tkachenko
Modified: 2021-06-10 12:45 UTC
CC List: 13 users

Fixed In Version: foreman-selinux-1.15.6.2, foreman-selinux-1.15.6.2-1
Doc Type: Known Issue
Doc Text:
There is a conflict between the SELinux modules container-selinux and foreman-selinux, caused by the redeclaration of docker_port_t. This prevents containers from being started, both from Satellite and manually. To work around this issue, set foreman_t to permissive by running `semanage permissive -a foreman_t`. If this occurs during the 6.3 Beta and blocks your completion or evaluation of the Beta, please contact Satellite Engineering via the beta email list for assistance.
Clone Of: 1414821
Environment:
Last Closed: 2018-02-21 16:54:17 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 18284 0 Normal Closed foreman-selinux is conflicting with container-selinux 2021-01-05 21:13:25 UTC
Red Hat Knowledge Base (Solution) 3198142 0 None None None 2017-09-27 17:49:27 UTC

Comment 2 Satellite Program 2017-08-07 16:15:40 UTC
Upstream bug assigned to dlobatog

Comment 4 Lukas Zapletal 2017-09-01 10:17:45 UTC
*** Bug 1478142 has been marked as a duplicate of this bug. ***

Comment 5 Satellite Program 2017-09-08 12:16:30 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/18284 has been resolved.

Comment 6 Roman Plevka 2017-11-01 09:45:26 UTC
FailedQA
since sat6.3.0-20
I think this should be FailedQA, as this fix probably causes Foreman to be unable to perform "test connection" while adding a Docker compute resource:

#audit.log:
type=AVC msg=audit(1509529091.800:4278): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.800:4278): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f3908845570 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.801:4279): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.801:4279): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f391788f620 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.802:4280): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.802:4280): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f391788d640 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.802:4281): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.802:4281): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f390ef3f528 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1509529091.812:4282): avc:  denied  { name_connect } for  pid=10260 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1509529091.812:4282): arch=c000003e syscall=42 success=no exit=-13 a0=b a1=7f3909304100 a2=10 a3=2 items=0 ppid=1 pid=10260 auid=4294967295 uid=992 gid=989 euid=992 suid=992 fsuid=992 egid=989 sgid=989 fsgid=989 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)

Foreman shows the "permission denied" error notification and is unable to use the compute resource at all.

I don't have direct proof that this regression was caused by this fix, but it is pretty much the only SELinux- and Docker-related BZ that appeared in the failed build.
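
For reference, denials like the above can be pulled from the audit log and turned into candidate allow rules with the standard audit tools. A diagnostic sketch, not a fix; output will vary:

# ausearch -m AVC -ts recent                  # list recent AVC denials
# ausearch -m AVC -ts recent | audit2allow    # show allow rules that would cover them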

Comment 7 Lukas Zapletal 2017-11-02 11:52:33 UTC
Roman,

you don't have the container-selinux package installed; install it and reload the Foreman policy, and then it will work.

We can't do anything about it: the docker port type is owned by that package, and we cannot add an allow rule without it present.

We cannot make it a hard RPM dependency, because container-selinux lives in the RHEL Extras repository and that would mean changing our requirements. Let me start a new thread about how the installer team wants to approach this.

If you can, please confirm it works with the package installed.
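
In practice that reload could look like the following sketch (disabling and re-enabling the module is one way to force a rebuild of the policy store; this assumes the module is installed under the name "foreman"):

# yum install container-selinux               # shipped in rhel-7-server-extras-rpms
# semodule -d foreman && semodule -e foreman  # rebuild and reload the policy with both modules present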

Comment 8 Roman Plevka 2017-11-02 13:32:02 UTC
(In reply to Lukas Zapletal from comment #7)
> Roman,
> 
> you don't have the container-selinux package installed; install it and
> reload the Foreman policy, and then it will work.
> 
> We can't do anything about it: the docker port type is owned by that
> package, and we cannot add an allow rule without it present.
> 
> We cannot make it a hard RPM dependency, because container-selinux lives in
> the RHEL Extras repository and that would mean changing our requirements.
> Let me start a new thread about how the installer team wants to approach
> this.
> 
> If you can, please confirm it works with the package installed.

Thanks for the suggestions.
The package actually seems to be installed already:

# yum list container-selinux
Installed Packages
container-selinux.noarch           2:2.28-1.git85ce147.el7           @rhel-7-server-extras-rpms

I'm not sure how to reload the policy, but I tried the following, with this result:

# semodule -r foreman
libsemanage.semanage_direct_remove_key: Removing last foreman module (no other foreman module exists at another priority).
Failed to resolve typeattributeset statement at /etc/selinux/targeted/tmp/modules/400/katello/cil:4
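
(The removal above fails because the katello policy module references types defined by the foreman module, as the error message hints. A quick, illustrative check of which related modules are loaded before attempting a removal:)

# semodule -l | grep -e foreman -e katello    # list the related loaded policy modules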

Comment 9 Lukas Zapletal 2017-11-06 14:13:44 UTC
I worked with SELinux team (lvrabec) on this and a patch was created upstream:

https://github.com/theforeman/foreman-selinux/pull/72
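
The general pattern in fixes like this is to stop redeclaring a type that another package owns and instead reference it only when present. A minimal, hypothetical sketch of that pattern (the module name foreman_docker_fix and the exact rule are illustrative only; see the pull request above for the real change):

# cat > foreman_docker_fix.te <<'EOF'
policy_module(foreman_docker_fix, 1.0)

gen_require(`
    type foreman_t;
')

# reference docker_port_t only if its owner (container-selinux) has declared it
optional_policy(`
    gen_require(`
        type docker_port_t;
    ')
    allow foreman_t docker_port_t:tcp_socket name_connect;
')
EOF
# make -f /usr/share/selinux/devel/Makefile foreman_docker_fix.pp   # needs selinux-policy-devel
# semodule -i foreman_docker_fix.pp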

Comment 10 Lukas Zapletal 2017-11-06 14:14:16 UTC
QA GA: Workaround: put foreman_t into permissive mode:

semanage permissive -a foreman_t
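
For completeness, applying, verifying, and later reverting the workaround (standard semanage subcommands; revert once a fixed foreman-selinux is installed):

# semanage permissive -a foreman_t   # run only the foreman_t domain in permissive mode
# semanage permissive -l             # verify: foreman_t is listed as a customized permissive type
# semanage permissive -d foreman_t   # revert after updating foreman-selinux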

Comment 11 Lukas Zapletal 2017-12-05 12:44:00 UTC
foreman-selinux-1.15.5-1.el7sat.noarch

This has already been fixed and shipped.

Comment 12 Lukas Pramuk 2017-12-19 15:51:51 UTC
VERIFIED.

@Satellite 6.3.0 Snap27
foreman-selinux-1.15.5-1.el7sat.noarch

by the manual reproducer in comment #0:

1. Install container-selinux

# yum install container-selinux

# semodule -l | grep -e container -e docker
container	2.33.0

2. Due to open BZ #1527052, manually assign the tcp port to the container_port_t label

# semanage port -a -t container_port_t -p tcp 2375

# semanage port -l |grep -e container -e docker
container_port_t               tcp      2375

3. Install foreman-selinux

# yum install foreman-selinux
...
Running transaction
  Installing : foreman-selinux-1.15.5-1.el7sat.noarch                                                                            1/1 
  Verifying  : foreman-selinux-1.15.5-1.el7sat.noarch                                                                            1/1 

Installed:
  foreman-selinux.noarch 0:1.15.5-1.el7sat                                                                                           

Complete!

>>> no warning/error message during rpm installation

# semanage fcontext -l | grep foreman
/etc/foreman(/.*)?                                 all files          system_u:object_r:etc_t:s0 
/etc/puppet/node.rb                                regular file       system_u:object_r:foreman_enc_t:s0 
/var/lib/foreman(/.*)?                             all files          system_u:object_r:foreman_lib_t:s0 
/var/log/foreman(/.*)?                             all files          system_u:object_r:foreman_log_t:s0 
/var/run/foreman(/.*)?                             all files          system_u:object_r:foreman_var_run_t:s0 
/usr/share/foreman/.ssh(/.*)?                      all files          system_u:object_r:ssh_home_t:s0 
/var/lib/foreman/db/(.*.sqlite3)?                  all files          system_u:object_r:foreman_db_t:s0 
/etc/puppetlabs/puppet/node.rb                     regular file       system_u:object_r:foreman_enc_t:s0 
/usr/share/foreman/config/hooks(/.*)?              all files          system_u:object_r:bin_t:s0 
/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/usr/share/foreman/extras/noVNC/websockify\.py     all files          system_u:object_r:websockify_exec_t:s0 

>>> foreman selinux labels are present => module loaded correctly, no conflict occurred

Comment 13 Lukas Pramuk 2018-01-08 11:03:09 UTC
FailedQA.

@Satellite 6.3.0 Snap30
foreman-selinux-1.15.5-1.el7sat.noarch

Though there is no longer a conflict between the modules, I have to fail this BZ again. The original cause of the conflict was the dubious definition of docker_port_t.
When I try to use docker via TCP, the passenger connection to docker_port_t is still denied! (see BZ #1531075)
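
A quick way to check whether the shipped policy actually allows the connection (a diagnostic sketch; sesearch comes from setools-console, and depending on the container-selinux version the port type may be docker_port_t or container_port_t):

# semanage port -l | grep 2375       # which port type does 2375/tcp carry?
# sesearch -A -s passenger_t -t docker_port_t -c tcp_socket -p name_connect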

Comment 14 Lukas Pramuk 2018-01-08 11:04:25 UTC
*** Bug 1531075 has been marked as a duplicate of this bug. ***

Comment 22 Lukas Pramuk 2018-01-16 10:46:31 UTC
VERIFIED.

@Satellite 6.3.0 Snap32
foreman-selinux-1.15.6.2-1.el7sat.noarch

by the manual reproducer in comment #0:

1. Install container-selinux
# yum install container-selinux
# semodule -l | grep container
container	2.33.0

2. Install foreman-selinux
# yum install foreman-selinux
...
Running transaction
  Installing : foreman-selinux-1.15.6.2-1.el7sat.noarch                                                                            1/1 
  Verifying  : foreman-selinux-1.15.6.2-1.el7sat.noarch                                                                            1/1 

Installed:
  foreman-selinux.noarch 0:1.15.6.2-1.el7sat                                                                                           

Complete!

>>> no warning/error message during rpm installation

3. Check foreman module is loaded correctly
# semanage fcontext -l | grep foreman
/etc/foreman(/.*)?                                 all files          system_u:object_r:etc_t:s0 
/etc/puppet/node.rb                                regular file       system_u:object_r:foreman_enc_t:s0 
/var/lib/foreman(/.*)?                             all files          system_u:object_r:foreman_lib_t:s0 
/var/log/foreman(/.*)?                             all files          system_u:object_r:foreman_log_t:s0 
/var/run/foreman(/.*)?                             all files          system_u:object_r:foreman_var_run_t:s0 
/usr/share/foreman/.ssh(/.*)?                      all files          system_u:object_r:ssh_home_t:s0 
/var/lib/foreman/db/(.*.sqlite3)?                  all files          system_u:object_r:foreman_db_t:s0 
/etc/puppetlabs/puppet/node.rb                     regular file       system_u:object_r:foreman_enc_t:s0 
/usr/share/foreman/config/hooks(/.*)?              all files          system_u:object_r:bin_t:s0 
/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-.*/bin/foreman-tasks regular file       system_u:object_r:foreman_tasks_exec_t:s0 
/usr/share/foreman/extras/noVNC/websockify\.py     all files          system_u:object_r:websockify_exec_t:s0 

>>> foreman selinux labels are present => module loaded correctly, no conflict occurred

4. Install Satellite and docker and check that passenger can connect to the docker port 2375/tcp
@URI /compute_resources/new 
- fill in URL http://localhost:2375
- hit button [Test Connection]

>>> Success: Test connection was successful

Comment 23 Andrew Dahms 2018-02-16 03:29:06 UTC
Setting the 'requires_doc_text' flag to '-'.

Comment 24 Satellite Program 2018-02-21 16:54:17 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0336

