Bug 2123273
| Summary: | Low probability metadata+connection failure | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Attila Fazekas <afazekas> |
| Component: | openstack-neutron | Assignee: | Elvira <egarciar> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Eran Kuris <ekuris> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 17.1 (Wallaby) | CC: | chrisw, scohen, twilson |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-07-28 16:05:21 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Attila Fazekas
2022-09-01 09:25:28 UTC
Hi, from the logs I can see that the requests arrive at the nova metadata API as expected and come back to the neutron metadata server, but it seems the reply never arrives at the VM itself, because this sequence is repeated until timeout. In compute-0/var/log/containers/neutron/ovn-metadata-agent.log:

    2022-08-17 13:42:04.456 26777 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/user-data HTTP/1.0
    Accept: */*
    Connection: close
    Content-Type: text/plain
    Host: 169.254.169.254
    User-Agent: curl/7.64.1
    X-Forwarded-For: 10.100.0.7
    X-Ovn-Network-Id: ee6d7961-8d65-463e-8ac8-c62df9ca0f65 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:84
    2022-08-17 13:42:04.592 26777 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:162
    2022-08-17 13:42:04.597 26777 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /2009-04-04/user-data HTTP/1.1" status: 200 len: 231 time: 0.1411240
    2022-08-17 13:42:04.603 26776 DEBUG eventlet.wsgi.server [-] (26776) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:992
    2022-08-17 13:42:04.604 26776 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping HTTP/1.0

The reason this is happening is either that the neutron metadata agent cannot find its way back to the VM (which does not make much sense, since it does receive the requests), or that something goes wrong within the VM itself when processing the metadata. I think it would be useful to have a live environment for this, if possible.

I can create a similar deployment, but it is unlikely I can preserve one where the issue happened. I can also show how to run tempest-stress with the same test on multiple threads.

Hi Attila! I'm interested in learning how to run tempest-stress with the same test on multiple threads, as you said, so that I can reproduce it. I sent you an email related to it. Is there any new run where you could see this happen again?
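To make the guest-side symptom concrete, here is a minimal sketch (not part of the original report) of a probe that keeps retrying the metadata endpoint until a deadline, roughly the way cloud-init or a test's curl loop behaves. The 60-second deadline and 5-second per-request timeout are illustrative assumptions, and it only makes sense when run inside a VM attached to the affected network.

```python
import time
import urllib.error
import urllib.request

# Hypothetical values for illustration only; not taken from the bug report.
METADATA_URL = "http://169.254.169.254/2009-04-04/user-data"
DEADLINE_SECONDS = 60

start = time.monotonic()
while time.monotonic() - start < DEADLINE_SECONDS:
    try:
        # Each attempt mirrors the guest's GET that shows up in the
        # ovn-metadata-agent log above.
        with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
            body = resp.read()
            print(f"got HTTP {resp.status}, {len(body)} bytes of user-data")
            break
    except (urllib.error.URLError, OSError) as exc:
        # In the reported failure the agent logs a 200, but the reply never
        # reaches the guest, so attempts keep timing out until the deadline.
        print(f"metadata request failed: {exc}; retrying")
        time.sleep(2)
else:
    print("gave up: no metadata response within the deadline")
```

In the failure mode reported here, the ovn-metadata-agent logs a 200 for every request while a probe like this still times out, which is consistent with the reply being lost somewhere on the return path to the guest.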