Description of problem:
Curling the metadata endpoint fails:
~~~
$ curl 169.254.169.254/openstack/latest/meta_data.json
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
~~~

Looking at the original permissions:
~~~
[root@ulposp001 /root]# ls -al /var/lib/neutron
total 64K
drwxrwxr-x+  7 42435 42435 4.0K Apr  1 11:47 ./
drwxr-xr-x. 94 root  root  4.0K Mar 31 15:07 ../
drwxrwxr-x+ 30 42435 42435 4.0K Mar 31 02:44 dhcp/
-rwxrwxr-x+  1 42435 42435 1.1K Mar 28 19:49 dhcp_haproxy_wrapper*
-rwxrwxr-x+  1 42435 42435 1.2K Mar 28 19:48 dibbler_wrapper*
-rwxrwxr-x+  1 42435 42435 1.0K Mar 28 19:49 dnsmasq_wrapper*
drwxrwxr-x+  3 42435 42435   18 Jan 27  2017 external/
drwxrwxr-x+ 14 42435 42435 4.0K Mar 26 07:01 ha_confs/
srwxrwxr-x+  1 42435 42435    0 Mar 31 02:43 keepalived-state-change=
-rwxrwxr-x+  1 42435 42435 1.1K Mar 28 19:48 keepalived_state_change_wrapper*
-rwxrwxr-x+  1 42435 42435 1.1K Mar 28 19:48 keepalived_wrapper*
-rwxrwxr-x+  1 42435 42435 1.1K Mar 28 19:48 l3_haproxy_wrapper*
drwxrwxr-x+  2 42435 42435  16K Dec  9 22:09 lock/
srw-rwxr--+  1 42435 42435    0 Apr  1 11:47 metadata_proxy=
drwxrwxr-x+  2 42435 42435 4.0K Mar 31 02:44 ns-metadata-proxy/
~~~

To fix this, we need to run the following commands, found in https://review.opendev.org/gitweb?p=openstack%2Ftripleo-heat-templates.git;a=commitdiff;h=818ad752f8b048217a0d5b76ea2c5f86714597f4 (from BZ 1563443#c5):
~~~
setfacl -d -R -m u:42435:rwx /var/lib/neutron
setfacl -R -m u:42435:rw /var/lib/neutron
find /var/lib/neutron -type d -exec setfacl -m u:42435:rwx '{}' \;
setfacl -m u:42435:rwx /var/lib/neutron/metadata_proxy
setfacl -m u:42435:rwx /var/lib/neutron
setfacl -m u:42435:rwx /var/lib/neutron/metadata_proxy
setfacl -m u:42435:rwx /var/lib/neutron/keepalived-state-change
setfacl -d -R -m u:42435:rwx /var/lib/neutron/metadata_proxy
setfacl -d -R -m u:42435:rwx /var/lib/neutron/keepalived-state-change
setfacl -d -R -m u:42435:rwx /var/lib/neutron
~~~
This only works until the container is restarted, at which point the permissions revert.

How reproducible:
Every time the container is restarted.
I'm setting the component to openstack-neutron, as this is a permissions issue related to its deployment.
Has this been resolved? I'm looking at build openstack-tripleo-heat-templates-8.4.1-42 from February, which seems to include the linked patch. I couldn't find the exact build where this patch was introduced. If it has not been resolved, could you provide the version of openstack-tripleo-heat-templates that you have installed?
Hi Dan,

The version installed is `openstack-tripleo-heat-templates-8.4.1-16.el7ost.noarch`.

We can fix the issue by running the following from https://review.opendev.org/gitweb?p=openstack%2Ftripleo-heat-templates.git;a=commitdiff;h=818ad752f8b048217a0d5b76ea2c5f86714597f4
~~~
setfacl -d -R -m u:42435:rwx /var/lib/neutron
setfacl -R -m u:42435:rw /var/lib/neutron
find /var/lib/neutron -type d -exec setfacl -m u:42435:rwx '{}' \;
setfacl -m u:42435:rwx /var/lib/neutron/metadata_proxy
setfacl -m u:42435:rwx /var/lib/neutron
setfacl -m u:42435:rwx /var/lib/neutron/metadata_proxy
setfacl -m u:42435:rwx /var/lib/neutron/keepalived-state-change
setfacl -d -R -m u:42435:rwx /var/lib/neutron/metadata_proxy
setfacl -d -R -m u:42435:rwx /var/lib/neutron/keepalived-state-change
setfacl -d -R -m u:42435:rwx /var/lib/neutron
~~~
But this only works until the container is restarted and the permissions revert.
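To confirm quickly whether the ACLs have reverted after a container restart, a check along these lines could be run on the affected node (a sketch; the paths and the UID 42435 are taken from the directory listing in the description):

```shell
#!/bin/sh
# Sketch: report whether UID 42435 still holds an ACL entry on the paths
# the workaround touches. "FAIL" means the entry is gone (workaround
# reverted) or the path is missing.
for p in /var/lib/neutron /var/lib/neutron/metadata_proxy; do
    if getfacl "$p" 2>/dev/null | grep -q '^user:42435'; then
        echo "OK:   $p"
    else
        echo "FAIL: $p"
    fi
done
```

On a healthy node both paths should report OK; after the restart described above, the entries disappear and the metadata endpoint returns 503 again.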
Thanks ldenny,

That package was built in Nov 2019. On the other hand, that patch does seem to be included in the package version you have installed. I'm wondering if this is related to the container build and not THT? Is there a way you could update to the latest containers in OSP 13? I can see at least two z-stream releases have GAed since that package was built.
Hi Dan, Unfortunately, we can't update the customer to the latest containers. We are going to try pulling a fresh copy of the current openstack-neutron-metadata-agent:13.0-106 container and launching it with paunch to see if that helps.
Any updates here? What if we close this out, and you can reopen it if you need more assistance?
Hi Dan, Let's do that, I will close and reopen if needed. Cheers.