Bug 2096387 - icmp health monitors are broken in rhosp16.1
Summary: icmp health monitors are broken in rhosp16.1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 16.1 (Train)
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: z9
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: Gregory Thiemonge
QA Contact: Bruna Bonguardo
URL:
Whiteboard:
Depends On:
Blocks: 2123318 2125610
 
Reported: 2022-06-13 17:04 UTC by David Hill
Modified: 2022-12-23 01:31 UTC (History)
CC: 11 users

Fixed In Version: openstack-octavia-5.0.3-1.20220915133621.8c32d2e.el8ost openstack-selinux-0.8.24-1.20220919133635.26243bf.el8ost
Doc Type: Bug Fix
Doc Text:
Before this update, a SELinux issue triggered errors when using the ICMP monitor in the Load-balancing service (octavia) amphora driver. With this update, the SELinux issue is fixed.
Clone Of:
Clones: 2123318
Environment:
Last Closed: 2022-12-07 20:27:07 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Github redhat-openstack openstack-selinux pull 95 0 None open Add new boolean os_haproxy_ping for Octavia amphora 2022-07-06 10:09:19 UTC
OpenStack gerrit 840315 0 None MERGED Apply openstack-selinux policies in Centos amphorae 2022-08-22 12:45:33 UTC
OpenStack gerrit 853999 0 None MERGED Apply openstack-selinux policies in Centos amphorae 2022-09-07 06:37:53 UTC
RDO 44614 0 None None None 2022-08-22 14:21:53 UTC
Red Hat Issue Tracker OSP-15652 0 None None None 2022-06-13 17:09:42 UTC
Red Hat Knowledge Base (Solution) 6962975 0 None None None 2022-06-13 17:19:24 UTC
Red Hat Product Errata RHBA-2022:8795 0 None None None 2022-12-07 20:27:52 UTC

Description David Hill 2022-06-13 17:04:53 UTC
Description of problem:
icmp health monitors are broken in rhosp16.1 due to what appears to be a selinux restriction:

type=AVC msg=audit(1655139457.142:1084): avc:  denied  { execute } for  pid=7087 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139460.518:1090): avc:  denied  { execute } for  pid=7093 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139461.328:1091): avc:  denied  { execute } for  pid=7094 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139462.247:1113): avc:  denied  { execute } for  pid=7132 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139465.621:1118): avc:  denied  { execute } for  pid=7157 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139466.431:1119): avc:  denied  { execute } for  pid=7158 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139467.349:1120): avc:  denied  { execute } for  pid=7159 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139470.722:1121): avc:  denied  { execute } for  pid=7164 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139471.532:1122): avc:  denied  { execute } for  pid=7165 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=0
type=AVC msg=audit(1655139472.450:1125): avc:  denied  { execute } for  pid=7167 comm="haproxy" name="bash" dev="vda1" ino=4215375 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file permissive=1
type=AVC msg=audit(1655139472.452:1126): avc:  denied  { execute } for  pid=7168 comm="ping-wrapper.sh" name="ping" dev="vda1" ino=4215754 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:ping_exec_t:s0 tclass=file permissive=1
type=AVC msg=audit(1655139472.452:1126): avc:  denied  { read open } for  pid=7168 comm="ping-wrapper.sh" path="/usr/bin/ping" dev="vda1" ino=4215754 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:ping_exec_t:s0 tclass=file permissive=1
type=AVC msg=audit(1655139472.452:1126): avc:  denied  { execute_no_trans } for  pid=7168 comm="ping-wrapper.sh" path="/usr/bin/ping" dev="vda1" ino=4215754 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:ping_exec_t:s0 tclass=file permissive=1
type=AVC msg=audit(1655139472.457:1127): avc:  denied  { setcap } for  pid=7168 comm="ping" scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=process permissive=1
type=AVC msg=audit(1655139472.457:1128): avc:  denied  { create } for  pid=7168 comm="ping" scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=icmp_socket permissive=1
type=AVC msg=audit(1655139472.457:1129): avc:  denied  { create } for  pid=7168 comm="ping" scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=rawip_socket permissive=1
type=AVC msg=audit(1655139472.457:1130): avc:  denied  { setopt } for  pid=7168 comm="ping" lport=1 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=rawip_socket permissive=1
type=AVC msg=audit(1655139472.457:1131): avc:  denied  { getopt } for  pid=7168 comm="ping" lport=1 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=rawip_socket permissive=1
type=AVC msg=audit(1655139496.100:1133): avc:  denied  { execmem } for  pid=7217 comm="haproxy" scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=process permissive=1




[root@amphora-511460b7-c49e-4e3c-aa25-b40b6a162d4a audit]# grep denied audit.log  | audit2allow -R

require {
        type haproxy_t;
        type shell_exec_t;
        type ping_exec_t;
        class process { execmem setcap };
        class icmp_socket create;
        class rawip_socket { create getopt setopt };
        class file { execute execute_no_trans open read };
}

#============= haproxy_t ==============
allow haproxy_t ping_exec_t:file { execute execute_no_trans open read };
allow haproxy_t self:icmp_socket create;

#!!!! This avc can be allowed using the boolean 'cluster_use_execmem'
allow haproxy_t self:process execmem;
allow haproxy_t self:process setcap;
allow haproxy_t self:rawip_socket { create getopt setopt };
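
As a temporary local workaround (not the proper fix, which landed in openstack-selinux), output like the above can be compiled and loaded as a custom policy module with audit2allow; a sketch, with an arbitrary module name:

```shell
# Generate a local policy module from the recorded denials
# (the module name "local-haproxy-ping" is arbitrary)
grep denied /var/log/audit/audit.log | audit2allow -M local-haproxy-ping

# Review the generated local-haproxy-ping.te, then load the compiled module
semodule -i local-haproxy-ping.pp

# Confirm the module is active
semodule -l | grep local-haproxy-ping
```

Note this would need to be redone on each new amphora, which is why the fix belongs in the openstack-selinux policy shipped in the image.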

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 9 Cédric Jeanneret 2022-06-30 13:45:24 UTC
Hello there,

IMHO, the fix should be:

- add 2 new rules to the openstack-selinux[1], activated via a new boolean "haproxy_ping":
require {
        type haproxy_t;
        type ping_exec_t;
        class icmp_socket create;
        class file { execute execute_no_trans open read };
}
#============= haproxy_t ==============
allow haproxy_t ping_exec_t:file { execute execute_no_trans open read };
allow haproxy_t self:icmp_socket create;

- add an option the operator can toggle at the LB creation that will then toggle both haproxy_ping and cluster_use_execmem

That way, we:
- allow this specific usage
- but also ensure we're not enabling it by default everywhere
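
In os-octavia.te terms, the first point might look roughly like the following (the boolean name os_haproxy_ping matches the linked pull request; the exact macro usage here is a sketch, not the merged policy):

```
gen_tunable(os_haproxy_ping, false)

tunable_policy(`os_haproxy_ping',`
	allow haproxy_t ping_exec_t:file { execute execute_no_trans open read };
	allow haproxy_t self:icmp_socket create;
')
```

With the rules gated this way, nothing changes for deployments that never use PING health monitors, and the boolean can be flipped with setsebool where it is needed.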

@jpichon any thoughts? We should also check what is actually available via the cluster_use_execmem - maybe it's something we should cover in the "haproxy_ping" if it opens too many things.

@gthiemon does it sound doable? Especially the "pass a parameter at LB creation" part.

@dhill would this solution be OK? It would require recreating the LB in order to toggle the option, but since customers will have to redeploy the new image anyway in order to get the new openstack-selinux, it shouldn't be that bad?

Cheers,

C.

[1] https://github.com/redhat-openstack/openstack-selinux/blob/master/os-octavia.te

Comment 10 Michael Johnson 2022-06-30 18:25:46 UTC
I don't think there is a need to change the Octavia API here. You could toggle your selinux rules when the health monitor of type PING is created.
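
Assuming the os_haproxy_ping boolean from the openstack-selinux pull request is present in the amphora image, the runtime toggle described here would amount to something like:

```shell
# Inside the amphora, when a health monitor of type PING is configured:
setsebool -P os_haproxy_ping on

# cluster_use_execmem covers the execmem denial seen in the audit log
setsebool -P cluster_use_execmem on
```

The -P flag makes the change persistent across reboots; without it the boolean reverts when the amphora restarts.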

Comment 11 Julie Pichon 2022-07-04 08:16:28 UTC
(In reply to Cédric Jeanneret from comment #9)
> @jpichon any thoughts? We should also check what is actually
> available via the cluster_use_execmem - maybe it's something we should cover
> in the "haproxy_ping" if it opens too many things.

These rules behind a new boolean look fine to me. cluster_use_execmem is limited to allowing execmem for the cluster_domain process [1] and looks fine as well. 

"allow haproxy_t self:process setcap;" shows up outside of the cluster_use_execmem boolean in my audit2allow output though, so we may have to add it under the custom boolean too. I'm not sure what the selinux-policy version is on the system from the description; it would be good to compare.

[1] https://github.com/fedora-selinux/selinux-policy/blob/9cb8de3f5d06e2624d728b97ab23d08321b0ad9a/policy/modules/contrib/rhcs.te#L132

Comment 12 Gregory Thiemonge 2022-07-04 08:46:24 UTC
> @gthiemon does it sounds doable? Especially the "pass a parameter
> at LB creation".

we could enable the SELinux policy exceptions at runtime when a HM of type PING is created, but I'm afraid that this code would not be fully tested upstream (default gates are based on Ubuntu; we also have some CentOS jobs, but SELinux is configured as permissive there).

we also have an existing patch to enable a SELinux boolean when building the amphora image (https://review.opendev.org/c/openstack/octavia/+/840315). I would rather keep those booleans in a single location.

Comment 36 errata-xmlrpc 2022-12-07 20:27:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.9 bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8795

