Description of problem:

In RHOSP13, we deploy the neutron-haproxy container to run haproxy, which forwards metadata requests coming from instances to neutron-metadata-proxy. The haproxy configuration, generated automatically by neutron, expects /dev/log to be accessible from haproxy running inside the container.

~~~
[heat-admin@controller-1 ~]$ head -4 /var/lib/neutron/ns-metadata-proxy/70fd01de-9b5c-4e96-a717-53ac64281f5f.conf

global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-70fd01de-9b5c-4e96-a717-53ac64281f5f
~~~

However, the neutron-haproxy container does not actually have access to the host's /dev/log. As a result, logs from haproxy are not recorded anywhere, even though these logs can be useful when investigating gateway timeout issues in haproxy.

~~~
[heat-admin@controller-1 ~]$ sudo docker ps | grep neutron-haproxy
25958cb4a60e        192.168.24.1:8787/rhosp13/openstack-neutron-l3-agent:2019-03-18.1-grades   "ip netns exec qro..."   2 weeks ago   Up 2 weeks   neutron-haproxy-qrouter-70fd01de-9b5c-4e96-a717-53ac64281f5f
[heat-admin@controller-1 ~]$ sudo docker exec -t 25958cb4a60e ls /dev/log
ls: cannot access /dev/log: No such file or directory
[heat-admin@controller-1 ~]$ sudo docker inspect 25958cb4a60e | grep -A 5 Binds
            "Binds": [
                "/var/lib/config-data/puppet-generated/neutron/etc/neutron:/etc/neutron:ro",
                "/run/netns:/run/netns:shared",
                "/var/lib/neutron:/var/lib/neutron"
            ],
            "ContainerIDFile": "",
[heat-admin@controller-1 ~]$
~~~

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Deploy the overcloud.
2. Create a router and confirm that a neutron-haproxy-qrouter container is running on one of the controller nodes.
3. Deploy an instance which requires access to metadata. No haproxy logs are recorded anywhere.

Actual results:
Logs from haproxy running inside the neutron-haproxy container are not recorded.

Expected results:
Logs from haproxy running inside the neutron-haproxy container are recorded.

Additional info:
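For background on why the missing bind mount silences haproxy logging entirely: /dev/log is a Unix datagram socket owned by the host's syslog daemon, and haproxy's `log /dev/log local0` directive sends syslog datagrams to that path. If the socket file is not visible inside the container's filesystem, there is simply nothing to connect to, so the messages are dropped. A minimal, self-contained sketch of that mechanism (using a temporary path to stand in for /dev/log; the message content is made up for illustration):

```python
import os
import socket
import tempfile

tmpdir = tempfile.mkdtemp()
log_path = os.path.join(tmpdir, "log")  # stands in for /dev/log

# "syslog daemon" side: bind a SOCK_DGRAM unix socket at the path.
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(log_path)

# "haproxy" side: send an RFC 3164-style syslog datagram to the same path.
# Inside the container, this path only exists if it is bind-mounted in.
client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
client.sendto(b"<135>haproxy[1]: backend check ok", log_path)

msg, _ = server.recvfrom(1024)
print(msg.decode())  # → <135>haproxy[1]: backend check ok

client.close()
server.close()
```

Bind-mounting the host's /dev/log into the container makes the daemon's socket reachable at the path haproxy is configured to use, which is exactly what the fix does.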
I found that the following patch, which fixes this issue, has already been backported to stable/queens: https://review.opendev.org/#/c/648128/
Hello Takashi,

After a quick check, the patch hadn't been backported downstream - the downstream backport has now been started (see https://code.engineering.redhat.com/gerrit/169711 ).

Does your issue need a hot-fix, or can it wait for the next Z-Stream (end of June)?

Cheers,

C.
Me again,

I had checked the wrong branch - Queens (OSP-13) has had the patch since the end of March, meaning it should be present in the latest z-stream (or the next one).

What's missing is the patch for OSP-14 (Rocky) - I've already started the upstream backport (https://review.opendev.org/657788) and will backport it downstream once it's merged.

Cheers,

C.
Hi Cédric,

Thank you for checking the current status.

The pasted logs were taken from my deployment with RHOSP13z5, so I think the fix is not yet included in that released version.

~~~
[heat-admin@controller-0 ~]$ cat /etc/rhosp-release
Red Hat OpenStack Platform release 13.0.5 (Queens)
~~~

I can wait for the next z-stream, as this is not urgently required now, but it will certainly be helpful for RCA of metadata issues.

Thank you,
Takashi
Hi Cédric,

Sorry - z6 has already been released, and the fix was included there. So no further action is needed for RHOSP13, but we still need the same backport for RHOSP14.

Thank you,
Takashi
Hello Takashi,

I've got the backport merged upstream and have started it downstream; it should be merged by the end of the week, and included in the next z-stream at the end of June.

Cheers,

C.
*** Bug 1714092 has been marked as a duplicate of this bug. ***
Verified on compose 2019-06-10.3

~~~
[heat-admin@controller-0 ~]$ sudo docker ps | grep neutron-haproxy
5b2e4703e6d0        192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2019-06-10.3   "ip netns exec qro..."   51 minutes ago   Up 51 minutes   neutron-haproxy-qrouter-c8cadb82-566b-4dbe-8ae6-fab39b97ce6e
[heat-admin@controller-0 ~]$ sudo docker inspect 5b2e4703e6d0 | grep -A 5 Binds
            "Binds": [
                "/var/lib/config-data/puppet-generated/neutron/etc/neutron:/etc/neutron:ro",
                "/run/netns:/run/netns:shared",
                "/var/lib/neutron:/var/lib/neutron",
                "/dev/log:/dev/log"
            ],
[heat-admin@controller-0 ~]$ sudo docker exec -t 5b2e4703e6d0 ls /dev/log
/dev/log
$
~~~
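As a quick way to check whether a given container carries the fix, a small script along these lines could parse the `docker inspect` output and look for the /dev/log entry in the bind mounts. This is only a sketch; the `has_dev_log_bind` helper and the embedded sample JSON are illustrative, with the sample mirroring the pre-fix bind list from the bug description:

```python
import json

def has_dev_log_bind(inspect_json: str) -> bool:
    """Return True if the container's bind mounts include /dev/log.

    `inspect_json` is the JSON array printed by `docker inspect <id>`.
    """
    data = json.loads(inspect_json)
    binds = data[0].get("HostConfig", {}).get("Binds") or []
    return any(b.startswith("/dev/log:") for b in binds)

# Sample mirroring the affected container's bind list (no /dev/log entry):
sample = json.dumps([{
    "HostConfig": {
        "Binds": [
            "/var/lib/config-data/puppet-generated/neutron/etc/neutron:/etc/neutron:ro",
            "/run/netns:/run/netns:shared",
            "/var/lib/neutron:/var/lib/neutron",
        ]
    }
}])
print(has_dev_log_bind(sample))  # → False
```

In a real check, the JSON would come from e.g. `subprocess.check_output(["docker", "inspect", container_id])` instead of the hard-coded sample.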
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1672