Bug 1045067 - [oslo] With QPID, RPC calls to a topic are always fanned-out to all subscribers.
Summary: [oslo] With QPID, RPC calls to a topic are always fanned-out to all subscribers.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Target Release: 4.0
Assignee: Ihar Hrachyshka
QA Contact: Ofer Blaut
URL:
Whiteboard:
Depends On: 1038717
Blocks:
 
Reported: 2013-12-19 14:40 UTC by Scott Lewis
Modified: 2022-07-09 06:17 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, a bug in the QPID topic consumer re-connection logic (under the v2 topology) caused qpidd to use a malformed subscriber address after a restart, so RPC requests sent to a topic served by multiple servers were incorrectly multicast to all of them. This update removes the special-case reconnect logic that handled UUID addresses, which in turn avoids establishing multiple subscriptions to the same fanout address. The QPID broker now simply generates unique queue names automatically when clients reconnect (an illustrative consumer sketch follows the field list below).
Clone Of: 1038717
Environment:
Last Closed: 2014-01-22 18:32:27 UTC
Target Upstream Version:
Embargoed:
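
For context on the doc text above: the behaviour the fix relies on can be illustrated with a short consumer written against the qpid.messaging Python API. This is only a sketch, not the actual oslo code; the broker URL and the address (whose subject mirrors the routing key seen in comment 1) are assumptions. Because the link is left unnamed, qpidd generates a unique queue for each subscription, so a reconnecting consumer cannot end up sharing another server's fanout address.

# Illustrative sketch only (assumed broker URL and address, not the oslo code).
from qpid.messaging import Connection

conn = Connection("localhost:5672")
conn.open()
try:
    session = conn.session()
    # No explicit link name: the broker picks a unique queue name for this
    # subscription, which is what the fixed reconnect path depends on.
    receiver = session.receiver("amq.topic/topic/neutron/q-plugin")
    msg = receiver.fetch(timeout=30)
    print(msg.content)
    session.acknowledge(msg)
finally:
    conn.close()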




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1257293 0 None None None Never
Red Hat Product Errata RHSA-2014:0091 0 normal SHIPPED_LIVE Moderate: openstack-neutron security, bug fix, and enhancement update 2014-01-22 23:31:15 UTC

Comment 1 Ihar Hrachyshka 2014-01-02 16:20:50 UTC
Ok, I think I've found the proper way to verify the bug against openstack-neutron. Here is what I did:

1. Installed RHEL 6.5, RHOS 4.0.z.
2. Deployed OpenStack via packstack.
3. Set 'qpid_topology_version = 2' in neutron.conf.
4. Checked the available neutron-related topic exchanges: qpid-config exchanges -r | grep neutron
5. Built the 'drain' utility and started several instances of it to follow one of the topic exchanges: ./drain -f q-plugin & (a rough Python equivalent is sketched after these steps).
6. Checked that no more than one of those listeners receives topic messages from the q-plugin exchange.
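
For reference, in case the 'drain' example is not handy, a rough Python equivalent using the qpid.messaging API is sketched below. This is only an approximation of drain's behaviour, under the assumptions that the broker runs locally and the address is passed on the command line; it simply prints every message routed to that address:

===

# drain-like sketch (assumptions: local broker, address given as argv[1])
import sys
from qpid.messaging import Connection

conn = Connection("localhost:5672")
conn.open()
try:
    session = conn.session()
    receiver = session.receiver(sys.argv[1] if len(sys.argv) > 1 else "q-plugin")
    while True:
        msg = receiver.fetch()  # block forever, like ./drain -f
        print("%s %s" % (msg.properties, msg.content))
        session.acknowledge(msg)
finally:
    conn.close()

===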

===

FYI: the message looks as follows when delivered to a 'drain' instance:

Message(properties={qpid.subject:topic/neutron/q-plugin, x-amqp-0-10.routing-key:topic/neutron/q-plugin}, content='{oslo.message:{"_context_roles": ["admin"], "_context_read_deleted": "no", "args": {"agent_state": {"agent_state": {"binary": "neutron-dhcp-agent", "topic": "dhcp_agent", "host": "rhel65", "agent_type": "DHCP agent", "configurations": {"subnets": 0, "use_namespaces": true, "dhcp_lease_duration": 120, "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq", "networks": 0, "ports": 0}}}, "time": "2014-01-02T16:13:35.501249"}, "namespace": null, "_unique_id": "5c87b1ada1a241c2b7429ed355e4114d", "_context_timestamp": "2014-01-02 16:13:35.501193", "_context_is_admin": true, "version": "1.0", "_context_project_id": null, "_context_tenant_id": null, "_context_user_id": null, "method": "report_state"}, oslo.version:2.0}')

===

The only remaining question I am trying to clear up is whether setting the topology as above is enough to ensure we are testing against the correct QPID topology.

Comment 2 Ihar Hrachyshka 2014-01-02 16:24:39 UTC
Answering the question above: as per /usr/share/neutron/neutron-dist.conf, that is the correct way to enforce the new topology.

Comment 3 Ihar Hrachyshka 2014-01-02 17:26:34 UTC
Based on the verification results above, I am closing the bug as verified. I will also follow up upstream on why the test clients/servers used by the original reporter produced incorrect behaviour.

Comment 6 errata-xmlrpc 2014-01-22 18:32:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0091.html

