Bug 1715492 - RHEL8 based amphora: can't start haproxy with SELinux enforcing
Summary: RHEL8 based amphora: can't start haproxy with SELinux enforcing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-selinux
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: beta
Target Release: 15.0 (Stein)
Assignee: Julie Pichon
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks: 1623857
Reported: 2019-05-30 13:33 UTC by Nir Magnezi
Modified: 2019-10-28 07:37 UTC
CC: 7 users

Fixed In Version: openstack-selinux-0.8.19-0.20190606150404.06faac7.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-21 11:22:34 UTC
Target Upstream Version:
nmagnezi: needinfo-


Attachments (Terms of Use)
audit.log (32.12 KB, text/plain)
2019-05-30 13:33 UTC, Nir Magnezi
no flags Details
messages.log (278.33 KB, text/plain)
2019-06-02 08:19 UTC, Nir Magnezi
no flags Details
journalctl.log (281.06 KB, text/plain)
2019-06-05 12:52 UTC, Nir Magnezi
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:22:55 UTC

Description Nir Magnezi 2019-05-30 13:33:01 UTC
Created attachment 1575229 [details]
audit.log

Description of problem:
=======================
Found while testing with OpenStack Octavia.
Octavia runs a RHEL8-based service VM (named Amphora) that runs haproxy.

haproxy fails to start when SELinux is set to enforcing, with the following AVC denial:

type=AVC msg=audit(1559218642.208:72): avc:  denied  { dac_override } for  pid=6702 comm="haproxy" capability=1  scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=capability permissive=1

Version-Release number of selected component (if applicable):
=============================================================
OSP15

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a loadbalancer via Octavia (topology: Single).
2. Create a loadbalancer listener with any TCP port; this starts haproxy.
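For reference, the reproduction steps above can be sketched with the OpenStack client (the names and the subnet are placeholders, not values from this report):

```shell
# Hypothetical sketch of the reproduction steps; lb1/listener1/private-subnet
# are placeholder names.
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet

# Once lb1 reaches ACTIVE, create a TCP listener; this is the step that
# starts haproxy inside the amphora.
openstack loadbalancer listener create --name listener1 \
    --protocol TCP --protocol-port 80 lb1
```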


Actual results:
openstack-selinux-0.8.19-0.20190515180355.e1c7511.el8ost.noarch
haproxy-1.8.12-2




audit2allow:

#============= haproxy_t ==============
allow haproxy_t self:capability dac_override;
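A local policy module equivalent to this audit2allow suggestion can be written out as a TE file; a minimal sketch (the module name local-haproxy-dac is arbitrary):

```
# local-haproxy-dac.te -- minimal local module for the denial above
module local-haproxy-dac 1.0;

require {
    type haproxy_t;
    class capability dac_override;
}

# Allow haproxy to bypass file permission (DAC) checks
allow haproxy_t self:capability dac_override;
```

It can be compiled and loaded with checkmodule/semodule_package/semodule, or generated directly from the audit log with `ausearch -c 'haproxy' --raw | audit2allow -M local-haproxy-dac`. Note that dac_override is a broad capability, so a labelling or permissions fix is generally preferable to allowing it.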



audit2why:

type=AVC msg=audit(1559218642.208:72): avc:  denied  { dac_override } for  pid=6702 comm="haproxy" capability=1  scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=capability permissive=1

    Was caused by:
        Missing type enforcement (TE) allow rule.

        You can use audit2allow to generate a loadable module to allow this access.

type=AVC msg=audit(1559218968.594:104): avc:  denied  { dac_override } for  pid=6702 comm="haproxy" capability=1  scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=capability permissive=1

    Was caused by:
        Missing type enforcement (TE) allow rule.

        You can use audit2allow to generate a loadable module to allow this access.

type=AVC msg=audit(1559219031.182:107): avc:  denied  { dac_override } for  pid=6702 comm="haproxy" capability=1  scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:system_r:haproxy_t:s0 tclass=capability permissive=1

    Was caused by:
        Missing type enforcement (TE) allow rule.

        You can use audit2allow to generate a loadable module to allow this access.

Comment 1 Nir Magnezi 2019-05-30 13:52:08 UTC
Proposed PR based on comment 0: https://github.com/redhat-openstack/openstack-selinux/pull/33

Comment 2 Julie Pichon 2019-05-30 13:58:44 UTC
This looks somewhat similar to bug 1597076, although that one appears to be fixed as of haproxy-1.8.12-2, which is older and was also backported downstream.

Ryan, since you worked on that other bug, I am wondering if you can think of anything else we might want to check here that could be causing the same dac_override problem.

In the meantime, we can update openstack-selinux with that new rule to resolve the issue for Octavia and unblock testing.

Comment 3 Ryan O'Hara 2019-05-30 19:53:23 UTC
(In reply to Julie Pichon from comment #2)
> This looks somewhat similar to bug 1597076, although that one looks to be
> fixed as of haproxy-1.8.12-2 which is older and was also backported
> downstream.
> 
> Ryan, since you worked on that other bug I am wondering if you can think of
> something else we might want to check here, that could be causing the same
> dac_override problem?

No, I can't think of anything else. You will need to modify the policy or fix the owner/group/permissions on the directory, assuming it is the same problem with /var/lib/haproxy/ as in #1597076. I can't tell from the AVC alone. The haproxy logs should point you in the right direction.

Comment 4 Julie Pichon 2019-05-31 08:17:04 UTC
Thanks, Ryan!

Nir, would you be able to attach the haproxy logs as well? From the other bug, looking at /var/log/messages around the AVC may be helpful for connecting the AVC denial to the haproxy issue more directly. If we could run ls -Z on any file mentioned there, this should help with figuring out the best next step. Thank you.

Comment 5 Nir Magnezi 2019-06-02 08:19:47 UTC
Created attachment 1576227 [details]
messages.log

(In reply to Julie Pichon from comment #4)
> Thanks, Ryan!
> 
> Nir, would you be able to attach the haproxy logs as well? From the other
> bug, looking at /var/log/messages around the AVC may be helpful for
> connecting the AVC denial to the haproxy issue more directly. If we could
> run ls -Z on any file mentioned there, this should help with figuring out
> the best next step. Thank you.

Sure, attached.

[root@amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec log]# ls -Z /usr/sbin/haproxy
system_u:object_r:haproxy_exec_t:s0 /usr/sbin/haproxy

Based on bug 1597076, I noticed they refer to the stats socket in haproxy.cfg, so I captured that as well:

[root@amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec log]# cat /var/lib/octavia/2b9024d6-afbc-4677-a9c8-41a0d1cf16f2/haproxy.cfg | grep stats
    stats socket /var/lib/octavia/2b9024d6-afbc-4677-a9c8-41a0d1cf16f2.sock mode 0666 level user
[root@amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec log]# ls -Z /var/lib/octavia/2b9024d6-afbc-4677-a9c8-41a0d1cf16f2.sock
system_u:object_r:var_lib_t:s0 /var/lib/octavia/2b9024d6-afbc-4677-a9c8-41a0d1cf16f2.sock
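If the generic var_lib_t label on that socket turned out to be involved, relabelling would be one way to test the theory; a hedged sketch (haproxy_var_lib_t is the type the standard haproxy policy uses for its runtime data under /var/lib/haproxy; whether it is appropriate for Octavia's path is an assumption):

```shell
# Inspect the current label on the stats socket
ls -Z /var/lib/octavia/2b9024d6-afbc-4677-a9c8-41a0d1cf16f2.sock

# Experimentally relabel it with the haproxy runtime-data type
# (temporary; a proper fix would add a file context rule instead)
chcon -t haproxy_var_lib_t /var/lib/octavia/2b9024d6-afbc-4677-a9c8-41a0d1cf16f2.sock
```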

Comment 6 Nir Magnezi 2019-06-02 08:23:37 UTC
Additionally, just because it was mentioned in the log:

[root@amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec log]# ls -Z /usr/lib/python3.6/site-packages/octavia/amphorae/backends/agent/api_server/listener.py
system_u:object_r:lib_t:s0 /usr/lib/python3.6/site-packages/octavia/amphorae/backends/agent/api_server/listener.py

Comment 7 Julie Pichon 2019-06-04 11:45:53 UTC
Thanks Nir!

Are there also any log files specific to haproxy itself, by any chance? Unfortunately it's not completely clear to me which file in particular may be causing the AVC denial just from messages, though the config file might be a good candidate as that seems to happen after reloading...

May 30 08:22:43 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec systemd[1]: Reloading HAProxy Load Balancer.
May 30 08:22:43 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec ip[6702]: [WARNING] 149/081754 (6702) : Reexecuting Master process
May 30 08:22:48 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec ip[6702]: [WARNING] 149/082243 (6702) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 2500031, limit is 2097152.
May 30 08:22:48 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec ip[6702]: [WARNING] 149/082243 (6702) : [/usr/sbin/haproxy.main()] FD limit (2097152) too low for maxconn=1000000/maxsock=2500031. Please raise 'ulimit-n' to 2500031 or more to avoid any trouble.
May 30 08:22:51 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec dbus-daemon[549]: [system] Activating service name='org.fedoraproject.Setroubleshootd' requested by ':1.76' (uid=0 pid=473 comm="/usr/sbin/sedispatch " label="system_u:system_r:auditd_t:s0") (using servicehelper)
May 30 08:22:56 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec systemd[1]: Reloaded HAProxy Load Balancer.
May 30 08:22:59 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec ip[6702]: [WARNING] 149/082243 (6702) : Former worker 6774 exited with code 0
May 30 08:23:00 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec dbus-daemon[549]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
May 30 08:23:01 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec setroubleshoot[6915]: SELinux is preventing /usr/sbin/haproxy from using the dac_override capability. For complete SELinux messages run: sealert -l 8468f528-6511-4e0c-a32e-cb0a5f3ad3c8
May 30 08:23:01 amphora-f9bdbe21-a2b6-4536-9717-d9cf30d184ec platform-python[6915]: SELinux is preventing /usr/sbin/haproxy from using the dac_override capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that haproxy should have the dac_override capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'haproxy' --raw | audit2allow -M my-haproxy
# semodule -X 300 -i my-haproxy.pp

Comment 8 Nir Magnezi 2019-06-05 12:46:48 UTC
(In reply to Julie Pichon from comment #7)
> Thanks Nir!
> 
> Are there also any log files specific to haproxy itself, by any chance?
> Unfortunately it's not completely clear to me which file in particular may
> be causing the AVC denial just from messages, though the config file might
> be a good candidate as that seems to happen after reloading...
> [quoted log lines from comment #7 snipped]


There is no dedicated log, so I captured the output from journalctl:

[root@amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473 audit]# journalctl |  grep haproxy
Jun 05 08:19:56 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal amphora-agent[854]: 2019-06-05 08:19:56.694 1262 DEBUG octavia.amphorae.backends.agent.api_server.listener [-] Found init system: systemd upload_haproxy_config /usr/lib/python3.6/site-packages/octavia/amphorae/backends/agent/api_server/listener.py:157
Jun 05 08:19:58 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal systemd[1]: Starting Configure amphora-haproxy network namespace...
Jun 05 08:19:58 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6639]: Cannot create namespace file "/var/run/netns/amphora-haproxy": File exists
Jun 05 08:19:58 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal systemd[1]: Started Configure amphora-haproxy network namespace.
Jun 05 08:19:59 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6759]: [WARNING] 155/081958 (6759) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 2500031, limit is 2097152.
Jun 05 08:19:59 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6759]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f started.
Jun 05 08:19:59 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6759]: [WARNING] 155/081958 (6759) : [/usr/sbin/haproxy.main()] FD limit (2097152) too low for maxconn=1000000/maxsock=2500031. Please raise 'ulimit-n' to 2500031 or more to avoid any trouble.
Jun 05 08:19:59 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6759]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f started.
Jun 05 08:20:07 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal setroubleshoot[6765]: SELinux is preventing /usr/sbin/haproxy from using the dac_override capability. For complete SELinux messages run: sealert -l 20a42272-3d45-4e02-971d-cb61aedc629c
Jun 05 08:20:07 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal platform-python[6765]: SELinux is preventing /usr/sbin/haproxy from using the dac_override capability.
                                                                                              If you believe that haproxy should have the dac_override capability by default.
                                                                                              # ausearch -c 'haproxy' --raw | audit2allow -M my-haproxy
                                                                                              # semodule -X 300 -i my-haproxy.pp
Jun 05 08:20:09 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal amphora-agent[854]: 2019-06-05 08:20:09.092 1262 DEBUG octavia.amphorae.backends.agent.api_server.listener [-] Found init system: systemd upload_haproxy_config /usr/lib/python3.6/site-packages/octavia/amphorae/backends/agent/api_server/listener.py:157
Jun 05 08:20:09 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6828]: Configuration file is valid
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6759]: [WARNING] 155/082009 (6759) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 2500031, limit is 2097152.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6759]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f started.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6759]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f started.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6759]: Proxy 2c231f9c-3604-4082-b1b6-7cde4bb926bf started.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6759]: Proxy 2c231f9c-3604-4082-b1b6-7cde4bb926bf started.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6759]: [WARNING] 155/082009 (6759) : [/usr/sbin/haproxy.main()] FD limit (2097152) too low for maxconn=1000000/maxsock=2500031. Please raise 'ulimit-n' to 2500031 or more to avoid any trouble.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6759]: [ALERT] 155/082009 (6759) : [/usr/sbin/haproxy.main()] Cannot fork.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6761]: Stopping frontend 82ecc5fd-f594-4d66-be75-54be2fade13f in 0 ms.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6761]: Stopping frontend 82ecc5fd-f594-4d66-be75-54be2fade13f in 0 ms.
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6761]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f stopped (FE: 0 conns, BE: 0 conns).
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6761]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f stopped (FE: 0 conns, BE: 0 conns).
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6759]: Usage : haproxy [-f <cfgfile|cfgdir>]* [ -vdVD ] [ -n <maxconn> ] [ -N <maxpconn> ]
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6759]: Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>
Jun 05 08:20:12 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal systemd[1]: haproxy-82ecc5fd-f594-4d66-be75-54be2fade13f.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 08:21:40 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal systemd[1]: haproxy-82ecc5fd-f594-4d66-be75-54be2fade13f.service: Reload operation timed out. Killing reload process.
Jun 05 08:21:40 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal systemd[1]: haproxy-82ecc5fd-f594-4d66-be75-54be2fade13f.service: Failed with result 'exit-code'.
Jun 05 08:21:40 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal amphora-agent[854]: 2019-06-05 08:21:40.193 1262 ERROR flask.app subprocess.CalledProcessError: Command '['/usr/sbin/service', 'haproxy-82ecc5fd-f594-4d66-be75-54be2fade13f', 'reload']' returned non-zero exit status 1.
Jun 05 08:21:40 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal systemd[1]: haproxy-82ecc5fd-f594-4d66-be75-54be2fade13f.service: Service RestartSec=100ms expired, scheduling restart.
Jun 05 08:21:40 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal systemd[1]: haproxy-82ecc5fd-f594-4d66-be75-54be2fade13f.service: Scheduled restart job, restart counter is at 1.
Jun 05 08:21:50 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6832]: [WARNING] 155/082140 (6832) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 2500031, limit is 2097152.
Jun 05 08:21:50 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6832]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f started.
Jun 05 08:21:50 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal ip[6832]: [WARNING] 155/082140 (6832) : [/usr/sbin/haproxy.main()] FD limit (2097152) too low for maxconn=1000000/maxsock=2500031. Please raise 'ulimit-n' to 2500031 or more to avoid any trouble.
Jun 05 08:21:50 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6832]: Proxy 82ecc5fd-f594-4d66-be75-54be2fade13f started.
Jun 05 08:21:50 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6832]: Proxy 2c231f9c-3604-4082-b1b6-7cde4bb926bf started.
Jun 05 08:21:50 amphora-4cb1feef-71da-4258-9a37-8f6ef6bed473.novalocal haproxy[6832]: Proxy 2c231f9c-3604-4082-b1b6-7cde4bb926bf started.

Comment 9 Nir Magnezi 2019-06-05 12:52:25 UTC
Created attachment 1577571 [details]
journalctl.log

Full journalctl log

Comment 10 Nir Magnezi 2019-06-09 11:47:15 UTC
PR https://github.com/redhat-openstack/openstack-selinux/pull/33 merged.

Comment 13 Julie Pichon 2019-06-13 15:49:33 UTC
Thanks for the logs, Nir. I merged the PR to unblock automation for now, although it would be good to get this fixed properly if we can. It seems the journal tells us how to turn on full auditing to find the file causing the issue:
--
If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file
--
If you have a chance, could you downgrade openstack-selinux to a version that doesn't contain the patch, and run the steps above? Thank you.

It looks like sealert -l 8468f528-6511-4e0c-a32e-cb0a5f3ad3c8 (or whatever id the logs suggest during the new try) may provide more information as well, though I'm not sure whether that would include the file/path information.

Comment 16 errata-xmlrpc 2019-09-21 11:22:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

