Bug 1577818 - SELinux disable script does not remove foreman_container_port_t
Summary: SELinux disable script does not remove foreman_container_port_t
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: SELinux
Version: 6.3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: 6.4.0
Assignee: Lukas Zapletal
QA Contact: jcallaha
URL: http://projects.theforeman.org/issues...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-14 08:19 UTC by Kenny Tordeurs
Modified: 2021-12-10 16:08 UTC (History)
CC List: 8 users

Fixed In Version: foreman-selinux-1.18.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-16 18:58:49 UTC
Target Upstream Version:
Embargoed:


Attachments
running container (29.22 KB, image/png), 2018-07-31 15:26 UTC, jcallaha


Links
Foreman Issue Tracker 23619 (Normal, Closed): SELinux disable script does not remove foreman_container_port_t (Last Updated: 2020-03-20 15:24:38 UTC)

Description Kenny Tordeurs 2018-05-14 08:19:17 UTC
Description of problem:
The following errors are seen when the SELinux boolean passenger_can_connect_all is not enabled:

In the web UI:
~~~
There was an error listing VMs: Permission denied - connect(2) for 10.44.130.52:2375 (Errno::EACCES)
~~~

in audit.log
~~~
type=AVC msg=audit(1526284925.939:332463): avc:  denied  { name_connect } for  pid=6233 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1526284925.939:332463): arch=c000003e syscall=42 success=no exit=-13 a0=14 a1=8c36c50 a2=10 a3=7f460ad2b7e0 items=0 ppid=1 pid=6233 auid=4294967295 uid=994 gid=993 euid=994 suid=994 fsuid=994 egid=993 sgid=993 fsgid=993 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=PROCTITLE msg=audit(1526284925.939:332463): proctitle=50617373656E676572205261636B4170703A202F7573722F73686172652F666F72656D616E
type=AVC msg=audit(1526284925.939:332464): avc:  denied  { name_connect } for  pid=6233 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1526284925.939:332464): arch=c000003e syscall=42 success=no exit=-13 a0=14 a1=8bfb010 a2=10 a3=2 items=0 ppid=1 pid=6233 auid=4294967295 uid=994 gid=993 euid=994 suid=994 fsuid=994 egid=993 sgid=993 fsgid=993 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=PROCTITLE msg=audit(1526284925.939:332464): proctitle=50617373656E676572205261636B4170703A202F7573722F73686172652F666F72656D616E
type=AVC msg=audit(1526284925.959:332465): avc:  denied  { name_connect } for  pid=6233 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1526284925.959:332465): arch=c000003e syscall=42 success=no exit=-13 a0=14 a1=8bd6da0 a2=10 a3=2 items=0 ppid=1 pid=6233 auid=4294967295 uid=994 gid=993 euid=994 suid=994 fsuid=994 egid=993 sgid=993 fsgid=993 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=PROCTITLE msg=audit(1526284925.959:332465): proctitle=50617373656E676572205261636B4170703A202F7573722F73686172652F666F72656D616E
type=AVC msg=audit(1526284925.959:332466): avc:  denied  { name_connect } for  pid=6233 comm="diagnostic_con*" dest=2375 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1526284925.959:332466): arch=c000003e syscall=42 success=no exit=-13 a0=14 a1=8b92a10 a2=10 a3=2 items=0 ppid=1 pid=6233 auid=4294967295 uid=994 gid=993 euid=994 suid=994 fsuid=994 egid=993 sgid=993 fsgid=993 tty=(none) ses=4294967295 comm="diagnostic_con*" exe="/opt/rh/rh-ruby23/root/usr/bin/ruby" subj=system_u:system_r:passenger_t:s0 key=(null)
type=PROCTITLE msg=audit(1526284925.959:332466): proctitle=50617373656E676572205261636B4170703A202F7573722F73686172652F666F72656D616E
~~~

Set the boolean to true:
# setsebool passenger_can_connect_all true

Verify it was set:
# getsebool passenger_can_connect_all 
passenger_can_connect_all --> on
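
Note that a plain setsebool change does not survive a reboot; if the workaround should persist, a minimal sketch using the -P flag (same boolean as above):
~~~
# Persist the workaround across reboots (-P writes the value into the policy)
setsebool -P passenger_can_connect_all true

# Verify
getsebool passenger_can_connect_all
# passenger_can_connect_all --> on
~~~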

Version-Release number of selected component (if applicable):
[root@provisioning ~]# rpm -qa | grep satellite
satellite-cli-6.3.1-3.el7sat.noarch
satellite-common-6.3.1-3.el7sat.noarch
satellite-6.3.1-3.el7sat.noarch
satellite-capsule-6.3.1-3.el7sat.noarch
satellite-clone-1.2.2-1.el7sat.noarch
satellite-installer-6.3.0.12-1.el7sat.noarch
tfm-rubygem-foreman_theme_satellite-1.0.4.17-1.el7sat.noarch

[root@provisioning ~]# rpm -qa | grep selinux
selinux-policy-3.13.1-192.el7_5.3.noarch
foreman-selinux-1.15.6.2-1.el7sat.noarch
pulp-selinux-2.13.4.9-1.el7sat.noarch
candlepin-selinux-2.1.15-1.el7.noarch
libselinux-2.5-12.el7.x86_64
libselinux-2.5-12.el7.i686
katello-selinux-3.0.2-1.el7sat.noarch
libselinux-python-2.5-12.el7.x86_64
selinux-policy-targeted-3.13.1-192.el7_5.3.noarch
libselinux-utils-2.5-12.el7.x86_64
container-selinux-2.28-1.git85ce147.el7.noarch
libselinux-ruby-2.5-12.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Create a docker compute resource
2. The connection will fail when the boolean is false
3. The connection will succeed when the boolean is true

Set the boolean to true:
# setsebool passenger_can_connect_all true
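
To confirm the underlying cause while reproducing, it can help to check whether the docker ports carry the foreman_container_port_t label (a quick check, assuming the default ports 2375/2376):
~~~
# Should list foreman_container_port_t for tcp 2375, 2376 when the label is present
semanage port -l | grep -E '2375|2376'
~~~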

Actual results:
Failure to connect

Expected results:
The SELinux boolean or port requirement should either be documented or be set by Satellite.


Additional info:

Comment 1 Kenny Tordeurs 2018-05-14 08:26:21 UTC
According to [0], there should be no need to execute any additional SELinux commands when using the default ports 2375 or 2376.

Snippet:
~~~
IMPORTANT
Use either port 2375 or 2376 for the connection. This is because the Satellite Server contains special SELinux rules to allow access to these ports. Using an alternative port results in authentication failure.
~~~

[0] https://access.redhat.com/documentation/en-us/red_hat_satellite/6.3/html-single/provisioning_guide/#Provisioning_Containers-Configuring_the_Red_Hat_Enterprise_Linux_Atomic_Host

Comment 4 Lukas Zapletal 2018-05-17 07:02:30 UTC
Kenny, please send me output of the following commands:

[root@qe-testing-rhel6 ~]# rpm -q foreman-selinux docker-selinux container-selinux
foreman-selinux-1.18.0-0.2.el7sat.noarch
package docker-selinux is not installed
container-selinux-2.55-1.el7.noarch

[root@qe-testing-rhel6 ~]# semanage port -l | grep 2375
foreman_container_port_t       tcp      2376, 2375

[root@qe-testing-rhel6 ~]# /usr/sbin/semanage port -E
port -a -t foreman_container_port_t -p tcp 2375
port -a -t foreman_container_port_t -p tcp 2376
port -a -t websm_port_t -p tcp 9400-14999

There were some renames of docker -> container in RHEL, and this caused an unclean upgrade path.
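
As a possible manual remediation (a sketch, not the packaged fix), the missing label can be re-added by hand; semanage port -a fails if the port is already defined, in which case -m modifies it instead:
~~~
# Re-add the default docker port labels if they are missing
semanage port -a -t foreman_container_port_t -p tcp 2375
semanage port -a -t foreman_container_port_t -p tcp 2376

# Verify
semanage port -l | grep foreman_container_port_t
~~~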

Comment 5 Kenny Tordeurs 2018-05-17 07:41:43 UTC
@Lukas.

[root@provisioning tmp]# rpm -q foreman-selinux docker-selinux container-selinux
~~~
foreman-selinux-1.15.6.2-1.el7sat.noarch
package docker-selinux is not installed
container-selinux-2.28-1.git85ce147.el7.noarch
~~~

I did this with the boolean set to false:
[root@provisioning tmp]# getsebool passenger_can_connect_all 
~~~
passenger_can_connect_all --> off
~~~

[root@provisioning tmp]# semanage port -l | grep 2375
~~~
no output
~~~

[root@provisioning tmp]# /usr/sbin/semanage port -E
~~~
port -a -t foreman_container_port_t -p tcp 2376
~~~

Comment 6 Lukas Zapletal 2018-05-17 08:05:41 UTC
Since there is a workaround, I am setting this to medium.

Normally calling

foreman-selinux-disable
foreman-selinux-enable

should fix this. That's what the yum upgrade foreman-selinux post scriptlet does. But I found an issue in the disable script; it does not work properly.

Can you please try to modify this file:

vi /usr/sbin/foreman-selinux-disable

According to: https://github.com/theforeman/foreman-selinux/pull/80
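
As an alternative to editing the file by hand, a sketch that applies the same one-line change with sed (assuming the stock script content from foreman-selinux 1.15.x; the .bkp suffix keeps a backup copy):
~~~
# Widen the grep in the disable script so all foreman_*_port_t ports are removed
sed -i.bkp 's/foreman_osapi_compute)_port_t/foreman_.*)_port_t/' /usr/sbin/foreman-selinux-disable

# Confirm the change
grep port_t /usr/sbin/foreman-selinux-disable
~~~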

Then please disable/enable:

[root@qe-testing-rhel6 ~]# /usr/sbin/semanage port -E
port -a -t foreman_container_port_t -p tcp 2375
port -a -t foreman_container_port_t -p tcp 2376


[root@qe-testing-rhel6 ~]# /usr/sbin/foreman-selinux-disable
libsemanage.semanage_direct_remove_key: Removing last foreman module (no other foreman module exists at another priority).

[root@qe-testing-rhel6 ~]# /usr/sbin/semanage port -E

[root@qe-testing-rhel6 ~]# /usr/sbin/foreman-selinux-enable 

[root@qe-testing-rhel6 ~]# /usr/sbin/semanage port -E
port -a -t foreman_container_port_t -p tcp 2375
port -a -t foreman_container_port_t -p tcp 2376

If this is confirmed, I will turn this BZ into a "disable does not work" bug; that should fix it.

Comment 7 Kenny Tordeurs 2018-05-17 08:38:17 UTC
[root@provisioning ~]# gendiff /usr/sbin/ .bkp
diff -up /usr/sbin/foreman-selinux-disable.bkp /usr/sbin/foreman-selinux-disable
--- /usr/sbin/foreman-selinux-disable.bkp	2018-05-17 10:31:36.873401207 +0200
+++ /usr/sbin/foreman-selinux-disable	2018-05-17 10:32:36.959882650 +0200
@@ -10,7 +10,7 @@ do
     # Remove all user defined ports (including the default one)
     # (docker and elastic can be removed in future release)
     /usr/sbin/semanage port -E | \
-      grep -E '(elasticsearch|docker|foreman_osapi_compute)_port_t' | \
+      grep -E '(elasticsearch|docker|foreman_.*)_port_t' | \
       sed s/-a/-d/g | \
       /usr/sbin/semanage -S $selinuxvariant -i -
     # Unload policy


# getsebool passenger_can_connect_all 
passenger_can_connect_all --> off

[root@provisioning ~]# /usr/sbin/foreman-selinux-disable
libsemanage.semanage_direct_remove_key: Removing last foreman module (no other foreman module exists at another priority).
Failed to resolve typeattributeset statement at /etc/selinux/targeted/tmp/modules/400/katello/cil:4
/usr/sbin/semodule:  Failed!

=> There seems to be some failure, but the ports are available again after foreman-selinux-enable:

[root@provisioning ~]# /usr/sbin/semanage port -E
[root@provisioning ~]# /usr/sbin/foreman-selinux-enable 

[root@provisioning ~]# /usr/sbin/semanage port -E
port -a -t foreman_container_port_t -p tcp 2375
port -a -t foreman_container_port_t -p tcp 2376

Comment 8 Lukas Zapletal 2018-05-17 09:50:36 UTC
Oh, that error is because katello must be disabled first:

katello-selinux-disable
...
...
katello-selinux-enable

Okay thanks.

Comment 14 jcallaha 2018-07-31 15:26:32 UTC
Created attachment 1471872 [details]
running container

Verified in Satellite 6.4 Snap 14.

The disable script now correctly handles ports 2375 and 2376. See below for the usage for turning this on and off.

-bash-4.2# semanage port -l | grep foreman
foreman_container_port_t       tcp      2376, 2375
-bash-4.2# 
-bash-4.2# katello-selinux-disable
libsemanage.semanage_direct_remove_key: Removing last katello module (no other katello module exists at another priority).
-bash-4.2# foreman-selinux-disable
libsemanage.semanage_direct_remove_key: Removing last foreman module (no other foreman module exists at another priority).
-bash-4.2# 
-bash-4.2# semanage port -l | grep foreman
-bash-4.2# 
-bash-4.2# foreman-selinux-enable
-bash-4.2# katello-selinux-enable
-bash-4.2# 
-bash-4.2# semanage port -l | grep foreman
foreman_container_port_t       tcp      2376, 2375

Additionally, I created a container with Satellite using its internal docker compute resource, while the ports were managed. See attached screenshot.

Comment 15 Bryan Kearney 2018-10-16 18:58:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2927

