Bug 1778881
| Summary: | Sorry, user {cinder,nova,heat} is not allowed to execute '/usr/sbin/ss -ntuap' as ... on controller-0 | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Chuck Short <chshort> |
| Component: | python-paunch | Assignee: | Cédric Jeanneret <cjeanner> |
| Status: | CLOSED ERRATA | QA Contact: | David Rosenfeld <drosenfe> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 16.0 (Train) | CC: | bdobreli, cjeanner, drosenfe, eharney, emacchi, ggrasza, j.thadden, mburns, michele, slinaber |
| Target Milestone: | z1 | Keywords: | Triaged |
| Target Release: | 16.0 (Train on RHEL 8.1) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | python-paunch-5.3.1-1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-03-03 09:45:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Chuck Short
2019-12-02 17:40:30 UTC
Does this cause a failure or just log the message? I believe it just logs the message.

Answering to Eric: currently it doesn't cause a failure, because the healthchecks aren't strict (missing pipefail and other options). But it should, and will once https://bugzilla.redhat.com/show_bug.cgi?id=1794044 is sorted out...

Proposal: adding --user root to the healthcheck command so that we can hopefully make things lighter, faster and easier. It should probably NOT create a security hole. And, after some thought, it's better than adding sudo rights.

I would rather configure the services' rootwrap.d to allow /usr/sbin/ss -ntuap (and nothing else).

My thinking is that rootwrap.d would open the ss command to be run by anything on the system, with no timing/frequency limit, whereas the health-check script is self-contained and (like the proposed rootwrap.d solution) doesn't use external inputs.

The original message is no longer seen in the logs:

```
sudo more /var/log/messages | grep Sorry
```

In addition, the tripleo_*_healthcheck.service files in /etc/systemd/system have been updated to include --user root in the ExecStart line:

```
[stack@undercloud-0 system]$ pwd
/etc/systemd/system
[stack@undercloud-0 system]$ more tripleo_*_healthcheck.service | grep ExecStart
ExecStart=/usr/bin/podman exec --user root glance_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root heat_api_cron /usr/share/openstack-tripleo-common/healthcheck/cron heat
ExecStart=/usr/bin/podman exec --user root heat_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root heat_engine /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root ironic_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_conductor /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root ironic_inspector_dnsmasq /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_inspector /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_neutron_agent /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root ironic_pxe_http /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root ironic_pxe_tftp /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root iscsid /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root keystone /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root logrotate_crond /usr/share/openstack-tripleo-common/healthcheck/cron
ExecStart=/usr/bin/podman exec --user root memcached /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root mistral_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root mistral_engine /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root mistral_event_engine /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root mistral_executor /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root mysql /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root neutron_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root neutron_dhcp /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root neutron_l3_agent /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root neutron_ovs_agent /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root nova_api_cron /usr/share/openstack-tripleo-common/healthcheck/cron nova
ExecStart=/usr/bin/podman exec --user root nova_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root nova_compute /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root nova_conductor /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root nova_scheduler /openstack/healthcheck 5672
ExecStart=/usr/bin/podman exec --user root placement_api /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root rabbitmq /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_account_server /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_container_server /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_object_server /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_proxy /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root swift_rsync /openstack/healthcheck
ExecStart=/usr/bin/podman exec --user root zaqar /usr/share/openstack-tripleo-common/healthcheck/zaqar-api
ExecStart=/usr/bin/podman exec --user root zaqar_websocket /usr/share/openstack-tripleo-common/healthcheck/zaqar-api 9000
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0655

*** Bug 1805857 has been marked as a duplicate of this bug. ***
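The rejected rootwrap.d alternative discussed above would have looked roughly like the following oslo.rootwrap filter file. This is a sketch only: the filter file name and path are illustrative, not taken from the bug, and no such file was actually shipped.

```
# e.g. /etc/nova/rootwrap.d/healthcheck.filters (hypothetical path)
[Filters]
# Allow exactly '/usr/sbin/ss -ntuap' to be run as root, and nothing else
ss: RegExpFilter, /usr/sbin/ss, root, ss, -ntuap
```

As noted above, the drawback is that any process able to invoke the service's rootwrap could then run ss as root at any time and frequency, which is why the self-contained `podman exec --user root` approach was chosen instead.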
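For reference, the "healthchecks aren't strict (missing pipefail)" point above can be demonstrated with a minimal shell pipeline. By default a pipeline's exit status is that of its last command, so a failing first stage (such as a denied ss invocation piped into grep) is silently masked; set -o pipefail makes the failure visible:

```shell
# A pipeline's exit status is that of its LAST command by default,
# so a failing first stage goes unnoticed:
bash -c 'false | true; echo "exit=$?"'
# prints exit=0

# With pipefail, any failing stage makes the whole pipeline fail,
# which is what a strict healthcheck needs:
bash -c 'set -o pipefail; false | true; echo "exit=$?"'
# prints exit=1
```

This is why the healthchecks logged the sudo denial without actually reporting failure until the strictness fix tracked in bug 1794044.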