Bug 1386410

Summary: 'No calling threads waiting for msg_id :' errors seen on compute node

Product: Red Hat OpenStack
Reporter: nalmond
Component: python-oslo-messaging
Assignee: Victor Stinner <vstinner>
Status: CLOSED WORKSFORME
QA Contact: Udi Shkalim <ushkalim>
Severity: high
Priority: high
Version: 6.0 (Juno)
CC: apevec, fdinitto, fpercoco, jeckersb, lhh, nalmond, sbandyop, srevivo, vstinner
Target Milestone: async
Keywords: Reopened, Triaged, ZStream
Target Release: 6.0 (Juno)
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-09 14:18:57 UTC
Type: Bug
Regression: ---
Documentation: ---
Verified Versions:
Target Upstream Version:

Description nalmond 2016-10-18 21:03:42 UTC
Description of problem:
We see errors like this on the compute node: 'No calling threads waiting for msg_id :'. In each of those messages we can see that all of the ports were responded to, but the replies are not being delivered to the neutron agent. There was an upstream bug fix for oslo.messaging in Kilo that was supposed to address this: https://github.com/openstack/oslo.messaging/commit/7bce31a2d14de29f1fc306938c7085f5205c9110#diff-9e692a136baf4fc701a9868fa789b442R102
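
For context, a minimal sketch (not the actual oslo.messaging code) of the reply-waiter pattern that produces this warning: the RPC client registers a per-msg_id queue, waits up to rpc_response_timeout, and unregisters the waiter when it gives up, so a reply that arrives after that point finds no waiting thread and is logged and dropped.

import logging
import queue
import uuid

LOG = logging.getLogger(__name__)


class ReplyWaiters(object):
    """Simplified stand-in for the driver's reply-waiter bookkeeping."""

    def __init__(self):
        self._queues = {}              # msg_id -> queue of incoming replies

    def add(self, msg_id):
        self._queues[msg_id] = queue.Queue()

    def remove(self, msg_id):
        self._queues.pop(msg_id, None)

    def put(self, msg_id, message):
        q = self._queues.get(msg_id)
        if q is None:
            # This is the situation reported in this bug: the caller has
            # already timed out and unregistered, so the late reply is dropped.
            LOG.warning('No calling threads waiting for msg_id : %s, '
                        'message : %s', msg_id, message)
            return
        q.put(message)

    def wait(self, msg_id, timeout):
        try:
            return self._queues[msg_id].get(timeout=timeout)
        finally:
            self.remove(msg_id)


if __name__ == '__main__':
    logging.basicConfig(level=logging.WARNING)
    waiters = ReplyWaiters()
    msg_id = uuid.uuid4().hex
    waiters.add(msg_id)
    try:
        waiters.wait(msg_id, timeout=0.1)   # stands in for rpc_response_timeout
    except queue.Empty:
        pass                                # caller gives up, waiter removed
    # Late reply arrives after the timeout -> warning is logged, reply dropped.
    waiters.put(msg_id, {'result': None, 'ending': True})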

Can we backport this patch to RHOSP 6?

Actual results:
Error is seen and messages aren't being delivered

Expected results:
Error does not appear and messages are delivered

Additional info:
The environment uses a set of Cisco ML2 plugins and a neutron-opflex-agent on the compute nodes. This agent is currently experiencing an outage.

Comment 4 John Eckersberg 2016-10-19 14:35:12 UTC
It looks to me like everything is working correctly; it's just very slow and timing out as configured.  For example, I see this error:

Oct 18 17:31:50 phllsu320 31706: ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] process_network_ports - iteration:7 - failure while retrieving port details from server
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1228, in process_network_ports
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     devices_added_updated, ovs_restarted)
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/opflexagent/gbp_ovs_agent.py", line 568, in treat_devices_added_or_updated
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     raise ovs.DeviceListRetrievalError(devices=devices, error=e)
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent DeviceListRetrievalError: Unable to retrieve port details for devices: set(...) because of error: Timed out waiting for a reply to message ID 4f95af1ed11b46c582af5c52b5bfd52d.

So at this point, the operation timed out, and the thread is no longer waiting for a reply to 4f95af1ed11b46c582af5c52b5bfd52d.

Then later we see:

Oct 18 17:40:37 phllsu320 31706: WARNING oslo.messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 4f95af1ed11b46c582af5c52b5bfd52d, message : {u'_unique_id': u'784aac09ee504738a743850fcb77c28d', u'failure': None, u'result': None, u'ending': True}

So the reply does eventually arrive, but it's almost *9 minutes* too late.  The real question is: why does this operation take about 10 minutes (9 minutes plus the 60 s rpc_response_timeout)?
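
A quick check of that arithmetic from the two log timestamps quoted above (a sketch only; the 60 s rpc_response_timeout default is an assumption about this deployment's configuration):

from datetime import datetime, timedelta

# Timestamps taken from the log excerpts above.
timed_out = datetime(2016, 10, 18, 17, 31, 50)    # DeviceListRetrievalError logged
late_reply = datetime(2016, 10, 18, 17, 40, 37)   # "No calling threads waiting" warning logged

rpc_response_timeout = timedelta(seconds=60)      # assumed default; check neutron.conf

late_by = late_reply - timed_out                  # how long after the timeout the reply landed
total = late_by + rpc_response_timeout            # the request was sent ~60 s before the timeout fired

print(late_by)   # 0:08:47 -> the reply is almost 9 minutes too late
print(total)     # 0:09:47 -> the whole RPC round trip took close to 10 minutes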

Comment 9 Victor Stinner 2016-12-07 10:39:46 UTC
nalmond: "As The case was closed due to inactivity from the customer." So I close the issue.