On OSP16, /usr/sbin/ss is not found in rootwrap. From /var/log/messages on controller-0:

(image: undercloud.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-cinder-scheduler:20191126.1, name=cinder_scheduler)
Dec 1 03:31:13 controller-0 podman[868579]: Sorry, user cinder is not allowed to execute '/usr/sbin/ss -ntuap' as cinder on controller-0.
Dec 1 03:31:13 controller-0 systemd[1]: Started cinder_scheduler healthcheck.

This install is using RHOS_TRUNK-16.0-RHEL-8-20191126.n.2.
Does this cause a failure or just log the message?
I believe it just logs the message.
Answering Eric: currently it doesn't cause a failure, because the healthchecks aren't strict (they're missing pipefail and other options). But it should, and it will once https://bugzilla.redhat.com/show_bug.cgi?id=1794044 is sorted out...
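For context, a minimal illustration (not the actual healthcheck script) of why a missing pipefail masks failures: without it, a pipeline's exit status is that of the last command, so a failing check piped into a filter still exits 0 and systemd considers the healthcheck successful.

```shell
# Without pipefail: the pipeline's status is taken from the last command.
bash -c 'false | true; echo "exit=$?"'
# prints exit=0 -- the failure of "false" is masked

# With pipefail: any failing stage fails the whole pipeline.
bash -c 'set -o pipefail; false | true; echo "exit=$?"'
# prints exit=1 -- the failure propagates
```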
Proposal: add --user root to the healthcheck command so that we can hopefully make things lighter, faster, and easier. It should probably NOT create a security hole, and after some thought it's better than adding sudo rights.
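Concretely, the proposal amounts to changing the systemd healthcheck units along these lines (cinder_scheduler is used as an example here; the exact unit file name is an assumption):

```
# /etc/systemd/system/tripleo_cinder_scheduler_healthcheck.service (excerpt)
[Service]
ExecStart=/usr/bin/podman exec --user root cinder_scheduler /openstack/healthcheck
```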
I would rather configure services' rootwrap.d to allow /usr/sbin/ss -ntuap (and nothing else)
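A sketch of what that filter could look like, assuming oslo.rootwrap's RegExpFilter semantics and a hypothetical filter file name; the per-argument regexes pin the allowed invocation to exactly ss -ntuap:

```
# /etc/cinder/rootwrap.d/healthcheck.filters (hypothetical file name)
[Filters]
# RegExpFilter: executable path, run-as user, then one regex per argument
ss: RegExpFilter, /usr/sbin/ss, root, ss, -ntuap
```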
My thinking is that rootwrap.d would open the ss command to anything on the system, with no timing/frequency limit, whereas the healthcheck script is self-contained and (like the proposed rootwrap.d solution) doesn't use external inputs.
The original message is no longer seen in the logs:

sudo more /var/log/messages | grep Sorry

In addition, the tripleo_*_healthcheck.service files in /etc/systemd/system have been updated to include --user root in the ExecStart line:

[stack@undercloud-0 system]$ pwd
/etc/systemd/system
[stack@undercloud-0 system]$ more tripleo_*_healthcheck.service | grep ExecStart
ExecStart=/usr/bin/podman exec --user root glance_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root heat_api_cron /usr/share/openstack-tripleo-common/healthcheck/cron heat
ExecStart=/usr/bin/podman exec --user root heat_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root heat_engine /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root ironic_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_conductor /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root ironic_inspector_dnsmasq /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_inspector /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_neutron_agent /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root ironic_pxe_http /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_pxe_tftp /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root iscsid /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root keystone /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root logrotate_crond /usr/share/openstack-tripleo-common/healthcheck/cron
ExecStart=/usr/bin/podman exec --user root memcached /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root mistral_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root mistral_engine /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root mistral_event_engine /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root mistral_executor /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root mysql /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root neutron_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root neutron_dhcp /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root neutron_l3_agent /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root neutron_ovs_agent /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root nova_api_cron /usr/share/openstack-tripleo-common/healthcheck/cron nova
ExecStart=/usr/bin/podman exec --user root nova_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root nova_compute /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root nova_conductor /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root nova_scheduler /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root placement_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root rabbitmq /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_account_server /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_container_server /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_object_server /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_proxy /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_rsync /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root zaqar /usr/share/openstack-tripleo-common/healthcheck/zaqar-api
ExecStart=/usr/bin/podman exec --user root zaqar_websocket /usr/share/openstack-tripleo-common/healthcheck/zaqar-api 9000
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0655
*** Bug 1805857 has been marked as a duplicate of this bug. ***