Bug 1738134
Summary: [OSP15][sensu] containers health check reports "Failed to connect to bus" instead of reporting health status of containers.
Product: Red Hat OpenStack
Component: openstack-selinux
Version: 15.0 (Stein)
Status: CLOSED ERRATA
Severity: urgent
Priority: medium
Reporter: Martin Magr <mmagr>
Assignee: Julie Pichon <jpichon>
QA Contact: Nataf Sharabi <nsharabi>
CC: apannu, cjeanner, jbadiapa, lars, lhh, lnatapov, lvrabec, mmagr, mrunge, rmccabe, scorcora, zcaplovi
Target Milestone: rc
Target Release: 15.0 (Stein)
Keywords: Regression, Triaged
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-selinux-0.8.19-0.20190813150447.72046d3.el8ost
Doc Type: If docs needed, set a value
Clone Of: 1728226
Bug Blocks: 1728226
Last Closed: 2019-09-21 11:24:21 UTC
Description
Martin Magr
2019-08-06 13:13:35 UTC
Created attachment 1601016 [details]
audit.log
type=AVC msg=audit(1565098666.014:137948): avc: denied { connectto } for pid=219615 comm="systemctl" path="/run/systemd/private" scontext=system_u:system_r:container_t:s0:c104,c864 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket permissive=1
type=SYSCALL msg=audit(1565098666.014:137948): arch=c000003e syscall=42 success=yes exit=0 a0=3 a1=55efbfd4cb70 a2=16 a3=7fff69376be0 items=0 ppid=219606 pid=219615 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="systemctl" exe="/usr/bin/systemctl" subj=system_u:system_r:container_t:s0:c104,c864 key=(null) ARCH=x86_64 SYSCALL=connect AUID="heat-admin" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root"

The audit log contains a lot of AVC denials from the health check runs. The entry above seems to be the one you are looking for.

OK, to summarize: I ran the podman command from the description, and right after it finished I ran:

[root@controller-0 ~]# egrep -i "system(ctl|d)" /var/log/audit/audit.log | grep -i avc

Considering the yum install prior to the systemctl execution, I think the following is relevant, as all entries (except the one before the last) are from pid=304402:

type=AVC msg=audit(1565099541.252:139337): avc: denied { write } for pid=304402 comm="yum" path="/usr/lib/systemd/boot/efi/linuxx64.efi.stub;5d498615" dev="vda2" ino=228359 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1565099541.259:139340): avc: denied { write } for pid=304402 comm="yum" name="system" dev="vda2" ino=4539392 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1565099541.259:139340): avc: denied { add_name } for pid=304402 comm="yum" name="cryptsetup-pre.target;5d498615" scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1565099541.259:139340): avc: denied { create } for pid=304402 comm="yum" name="cryptsetup-pre.target;5d498615" scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=file permissive=1
type=AVC msg=audit(1565099541.259:139340): avc: denied { write open } for pid=304402 comm="yum" path="/usr/lib/systemd/system/cryptsetup-pre.target;5d498615" dev="vda2" ino=5931842 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=file permissive=1
type=AVC msg=audit(1565099541.260:139341): avc: denied { setattr } for pid=304402 comm="yum" name="cryptsetup-pre.target;5d498615" dev="vda2" ino=5931842 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=file permissive=1
type=AVC msg=audit(1565099541.260:139342): avc: denied { remove_name } for pid=304402 comm="yum" name="cryptsetup-pre.target;5d498615" dev="vda2" ino=5931842 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1565099541.260:139342): avc: denied { rename } for pid=304402 comm="yum" name="cryptsetup-pre.target;5d498615" dev="vda2" ino=5931842 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=file permissive=1
type=AVC msg=audit(1565099541.260:139342): avc: denied { unlink } for pid=304402 comm="yum" name="cryptsetup-pre.target" dev="vda2" ino=5931843 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=file permissive=1
type=AVC msg=audit(1565099541.261:139343): avc: denied { create } for pid=304402 comm="yum" name="systemd-remount-fs.service;5d498615" scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=lnk_file permissive=1
type=AVC msg=audit(1565099541.261:139344): avc: denied { setattr } for pid=304402 comm="yum" name="systemd-remount-fs.service;5d498615" dev="vda2" ino=228408 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=lnk_file permissive=1
type=AVC msg=audit(1565099541.261:139345): avc: denied { rename } for pid=304402 comm="yum" name="systemd-remount-fs.service;5d498615" dev="vda2" ino=228408 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=lnk_file permissive=1
type=AVC msg=audit(1565099541.261:139345): avc: denied { unlink } for pid=304402 comm="yum" name="systemd-remount-fs.service" dev="vda2" ino=228409 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=lnk_file permissive=1
type=AVC msg=audit(1565099541.265:139346): avc: denied { setattr } for pid=304402 comm="yum" name="systemd-udev-trigger.service.d" dev="vda2" ino=6465377 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1565099541.685:139359): avc: denied { connectto } for pid=305401 comm="systemctl" path="/run/systemd/private" scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket permissive=1
type=AVC msg=audit(1565099541.686:139360): avc: denied { execute_no_trans } for pid=305402 comm="sh" path="/usr/lib/systemd/systemd-random-seed" dev="vda2" ino=1233861 scontext=system_u:system_r:container_t:s0:c243,c493 tcontext=system_u:object_r:lib_t:s0 tclass=file permissive=1

You can ignore the yum AVCs, as these are caused by installing a newer systemd-udev in the container than is on the host, while the container has /usr/lib/systemd shared.

Thank you for providing access to the machine with all the logs. Very strangely, the AVC denial with system_dbusd_t from comment 0 disappeared as soon as SELinux was set to permissive.
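As an aside, the egrep triage shown earlier can be condensed into a one-line summary per denial. This is a minimal sketch, not part of the original triage; it assumes the auditd log format seen above, and copies one sample entry from this report into a scratch file so the pipeline is self-contained:

```shell
# Write one sample AVC entry (copied from this report) to a scratch file.
cat > /tmp/audit-sample.log <<'EOF'
type=AVC msg=audit(1565098666.014:137948): avc: denied { connectto } for pid=219615 comm="systemctl" path="/run/systemd/private" scontext=system_u:system_r:container_t:s0:c104,c864 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket permissive=1
EOF

# Reduce each denial to "comm: denied <perm> (<scontext> -> <tcontext>)".
grep 'avc:' /tmp/audit-sample.log | sed -E \
  's/.*denied +[{] ([^}]*) [}].*comm="([^"]*)".*scontext=([^ ]*) +tcontext=([^ ]*).*/\2: denied \1 (\3 -> \4)/'
```

Run against the real /var/log/audit/audit.log, this makes it easy to spot that the health-check denials all share the container_t -> init_t pair.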
Did you do anything else at the same time permissive mode was set? Maybe that yum install? Normally there shouldn't be a difference... The init_t denial in comment 3, however, shows up all the time, whether in permissive mode or not, when calling that healthcheck timer command.

Before we add a rule for init_t, I wonder: did you already try to mount the systemd volume with :z enabled? Something like --volume=/usr/lib/systemd:/usr/lib/systemd:z. It's documented at https://docs.docker.com/storage/bind-mounts/#configure-the-selinux-label and I've seen it resolve "permission denied" issues in host/container communication before. Though there are serious warnings that go with using the label, and I'm not sure it would work as well on a /usr/lib directory.

No, the only thing I was doing was the podman command and setenforce 0/1. Oh yes, I tried --volume=/usr/lib/systemd:/usr/lib/systemd:ro,z and --volume=/usr/lib/systemd:/usr/lib/systemd:rw,z, but podman complains that this particular path is forbidden to relabel.

Ok, thank you for the answer!
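For reference, the relabel option discussed above has two variants, and paths like /usr/lib/systemd are rejected by podman's relabel safety check, which is exactly the complaint seen here. The sketch below only illustrates the flag syntax; /example/data is a hypothetical directory, not one from this bug:

```shell
# :z relabels the bind mount as shared among containers;
# :Z relabels it private to the one container.
# /example/data is hypothetical; podman refuses to relabel
# system paths such as /usr/lib/systemd (as seen in this bug).
SHARED="--volume=/example/data:/example/data:ro,z"
PRIVATE="--volume=/example/data:/example/data:rw,Z"
echo "podman run --rm ${SHARED} <image> cat /example/data/file"
echo "podman run --rm ${PRIVATE} <image> touch /example/data/file"
```

Because relabeling is off the table for /usr/lib/systemd, an allow rule in the openstack-selinux policy is the remaining option, which is what the proposed rules update below does.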
Proposed rules update at https://github.com/redhat-openstack/openstack-selinux/pull/35

The SELinux problem in this issue has been resolved:

[root@controller-0 ~]# podman run --systemd --network=host --volume=/etc/hosts:/etc/hosts:ro --volume=/etc/localtime:/etc/localtime:ro --volume=/dev/log:/dev/log --volume=/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume=/etc/puppet:/etc/puppet:ro --volume=/var/log/journal:/var/log/journal:ro --volume=/sys/fs/cgroup:/sys/fs/cgroup --volume=/run/dbus/system_bus_socket:/run/dbus/system_bus_socket:rw,z --volume=/run:/run --volume=/usr/lib/systemd:/usr/lib/systemd:rw --volume=/var/lib/kolla/config_files/sensu-client.json:/var/lib/kolla/config_files/config.json:ro --volume=/var/lib/config-data/puppet-generated/sensu/:/var/lib/kolla/config_files/src:ro --volume=/var/log/containers/sensu:/var/log/sensu:rw,z 2cc104aac809 systemctl list-timers --no-pager --no-legend "tripleo*healthcheck.timer"
Failed to connect to bus: No data available

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811
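The remaining "Failed to connect to bus: No data available" error means systemctl inside the container still cannot reach the host's systemd, even with SELinux out of the way. A minimal sketch of a first diagnostic step, under the assumption that it is run inside the container (note the command above bind-mounts both /run/dbus/system_bus_socket and all of /run, so one mount may shadow the other):

```shell
# systemctl talks to systemd/D-Bus via these sockets; if a bind mount
# is missing or shadowed, they will not show up as sockets in the container.
for sock in /run/dbus/system_bus_socket /run/systemd/private; do
  if [ -S "$sock" ]; then
    echo "$sock: socket present"
  else
    echo "$sock: missing or not a socket"
  fi
done
```

On a host where both sockets are live, running the same loop outside the container confirms whether the problem is the mounts or the sockets themselves.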