Bug 1386410 - 'No calling threads waiting for msg_id :' errors seen on compute node
Summary: 'No calling threads waiting for msg_id :' errors seen on compute node
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-oslo-messaging
Version: 6.0 (Juno)
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: async
Target Release: 6.0 (Juno)
Assignee: Victor Stinner
QA Contact: Udi Shkalim
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-18 21:03 UTC by nalmond
Modified: 2022-08-16 13:54 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-09 14:18:57 UTC
Target Upstream Version:




Links
Red Hat Issue Tracker OSP-4576 (Last Updated: 2022-08-16 13:54:45 UTC)

Description nalmond 2016-10-18 21:03:42 UTC
Description of problem:
We see errors like this on the compute node: 'No calling threads waiting for msg_id :'. Each of those messages contains the replies for all of the ports, but the replies are not being delivered to the neutron agent. There was an upstream bug fix for oslo.messaging in Kilo that was supposed to address this: https://github.com/openstack/oslo.messaging/commit/7bce31a2d14de29f1fc306938c7085f5205c9110#diff-9e692a136baf4fc701a9868fa789b442R102

Can we backport this patch to RHOSP 6?
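For reference, here is a minimal sketch of the reply-waiter pattern that produces this warning. It is an illustration only, with hypothetical class and method names, and is not the actual oslo.messaging code from the commit above: the driver keeps a map from outstanding msg_ids to the per-call queue of the waiting thread, and a reply whose msg_id is no longer registered has nowhere to be delivered, so it is only logged.

# Illustrative sketch only (hypothetical names); the real logic lives in
# oslo.messaging's amqpdriver module.
import logging
import queue
import threading

LOG = logging.getLogger(__name__)


class ReplyWaiters:
    """Map outstanding msg_ids to the queue of the thread awaiting a reply."""

    def __init__(self):
        self._queues = {}
        self._lock = threading.Lock()

    def add(self, msg_id):
        # Register interest in a reply before sending the RPC call.
        with self._lock:
            q = queue.Queue()
            self._queues[msg_id] = q
            return q

    def remove(self, msg_id):
        # Called when the caller gives up, e.g. rpc_response_timeout expires.
        with self._lock:
            self._queues.pop(msg_id, None)

    def put(self, msg_id, message):
        # Called when a reply arrives on the reply queue.
        with self._lock:
            q = self._queues.get(msg_id)
        if q is None:
            # The caller already timed out and deregistered itself, so the
            # late reply has nowhere to go: this is the warning in the logs.
            LOG.warning('No calling threads waiting for msg_id : %s, '
                        'message : %s', msg_id, message)
            return
        q.put(message)

In the failing case described above, the callers have already timed out and deregistered their msg_ids, so every late reply for the ports takes the warning branch.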

Actual results:
Error is seen and messages aren't being delivered

Expected results:
Error does not appear and messages are delivered

Additional info:
The deployment uses a set of Cisco ML2 plugins and a neutron-opflex-agent on the compute nodes. That agent is currently experiencing an outage.

Comment 4 John Eckersberg 2016-10-19 14:35:12 UTC
It looks to me like everything is working correctly; it's just very slow and timing out as configured. For example, I see this error:

Oct 18 17:31:50 phllsu320 31706: ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] process_network_ports - iteration:7 - failure while retrieving port details from server
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1228, in process_network_ports
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     devices_added_updated, ovs_restarted)
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/opflexagent/gbp_ovs_agent.py", line 568, in treat_devices_added_or_updated
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     raise ovs.DeviceListRetrievalError(devices=devices, error=e)
31706: TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent DeviceListRetrievalError: Unable to retrieve port details for devices: set(...) because of error: Timed out waiting for a reply to message ID 4f95af1ed11b46c582af5c52b5bfd52d.

So at this point, the operation timed out, and the thread is no longer waiting for a reply to 4f95af1ed11b46c582af5c52b5bfd52d.

Then later we see:

Oct 18 17:40:37 phllsu320 31706: WARNING oslo.messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 4f95af1ed11b46c582af5c52b5bfd52d, message : {u'_unique_id': u'784aac09ee504738a743850fcb77c28d', u'failure': None, u'result': None, u'ending': True}

So the reply does eventually arrive, but it's almost *9 minutes* too late.  The real question is, why is this operation taking about 10 minutes (9 minutes + 60s rpc_response_timeout)?
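To make that ordering concrete, here is a small, self-contained toy reproduction (not the oslo.messaging API; the names are made up and the timings are scaled down from 60s and roughly nine minutes to fractions of a second). The caller registers a waiter, times out, and deregisters itself; the reply that arrives afterwards can only be logged as unclaimed, which matches the pair of log lines above.

import logging
import queue
import threading
import time
import uuid

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger('toy_amqpdriver')

waiters = {}                  # msg_id -> Queue of the calling thread
RPC_RESPONSE_TIMEOUT = 0.1    # stands in for the default 60s rpc_response_timeout


def call(msg_id):
    waiters[msg_id] = queue.Queue()
    try:
        return waiters[msg_id].get(timeout=RPC_RESPONSE_TIMEOUT)
    except queue.Empty:
        # Analogue of "Timed out waiting for a reply to message ID ..."
        LOG.error('Timed out waiting for a reply to message ID %s', msg_id)
        raise
    finally:
        waiters.pop(msg_id, None)   # the caller stops listening here


def deliver_reply(msg_id, reply):
    q = waiters.get(msg_id)
    if q is None:
        # The reply arrived after the caller gave up: the warning seen on
        # the compute node.
        LOG.warning('No calling threads waiting for msg_id : %s, message : %s',
                    msg_id, reply)
        return
    q.put(reply)


msg_id = uuid.uuid4().hex
# The "server" answers long after the caller's timeout has expired.
threading.Timer(0.5, deliver_reply, args=(msg_id, {'ending': True})).start()
try:
    call(msg_id)
except queue.Empty:
    pass
time.sleep(1)   # give the late reply time to arrive and log the warning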

Comment 9 Victor Stinner 2016-12-07 10:39:46 UTC
nalmond: "The case was closed due to inactivity from the customer." So I am closing this issue.

