| Summary: | [oslo] With QPID, RPC calls to a topic are always fanned-out to all subscribers. |
|---|---|
| Product: | Red Hat OpenStack |
| Component: | openstack-neutron |
| Version: | unspecified |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | high |
| Reporter: | Perry Myers <pmyers> |
| Assignee: | Ihar Hrachyshka <ihrachys> |
| QA Contact: | Ofer Blaut <oblaut> |
| CC: | apevec, breeler, chrisw, dallan, fpercoco, hateya, kgiusti, lpeer, mlopes, ndipanov, oblaut, twilson, yeylon |
| Target Milestone: | async |
| Target Release: | 4.0 |
| Keywords: | TestOnly |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Fixed In Version: | openstack-neutron-2013.2-14.el6ost |
| Doc Type: | Bug Fix |
| Story Points: | --- |
| Clone Of: | 1038641 |
| Cloned To: | 1045067 (view as bug list) |
| Bug Blocks: | 1045067 |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| Category: | --- |
| oVirt Team: | --- |
| Cloudforms Team: | --- |
| Last Closed: | 2013-12-20 00:42:58 UTC |

Doc Text:

Prior to this update, the QPID topic consumer re-connection logic (under the v2 topology) incorrectly resulted in duplicate RPC notifications being delivered to every subscribed consumer. Consequently, samples derived from RPC notifications were duplicated to the extent that the collector service made multiple subscriptions to the topic control exchanges of individual services (for example, Compute).

With this release, QPID creates a single queue per topic and shares it among all corresponding consumers. This ensures that each RPC notification is received by only a single consumer, and prevents any unnecessary duplication of samples.
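The Doc Text describes a change in queueing semantics: per-consumer topic queues fan every message out to all subscribers, while a single shared queue delivers each message to exactly one consumer. The following is a minimal conceptual sketch in plain Python (a toy broker model, not the real `qpid.messaging` API; all names are illustrative) contrasting the two behaviours:

```python
from collections import deque


class Broker:
    """Toy stand-in for a QPID exchange; not the real qpid.messaging API."""

    def __init__(self):
        self.queues = {}    # queue name -> deque of pending messages
        self.bindings = {}  # topic -> list of queue names bound to it

    def declare_queue(self, topic, queue_name):
        self.queues.setdefault(queue_name, deque())
        self.bindings.setdefault(topic, [])
        if queue_name not in self.bindings[topic]:
            self.bindings[topic].append(queue_name)

    def publish(self, topic, msg):
        # One copy of the message lands in every queue bound to the topic.
        for queue_name in self.bindings.get(topic, []):
            self.queues[queue_name].append(msg)

    def consume(self, queue_name):
        q = self.queues[queue_name]
        return q.popleft() if q else None


# Buggy pattern: each consumer declares its own queue for the topic, so
# every RPC call is effectively fanned out to all subscribers.
buggy = Broker()
buggy.declare_queue("neutron", "neutron.consumer-1")
buggy.declare_queue("neutron", "neutron.consumer-2")
buggy.publish("neutron", "report_state")
duplicates = [buggy.consume("neutron.consumer-1"),
              buggy.consume("neutron.consumer-2")]
# Both consumers received a copy of the same message: duplicate delivery.

# Fixed pattern: all consumers share a single queue per topic, so each
# message is removed by exactly one consumer (round-robin in practice).
fixed = Broker()
fixed.declare_queue("neutron", "neutron")  # shared queue, declared once
fixed.declare_queue("neutron", "neutron")  # second consumer reuses it
fixed.publish("neutron", "report_state")
delivered = [fixed.consume("neutron"), fixed.consume("neutron")]
# Only the first consume() returns the message; the second gets nothing.
```

The design point is that with a shared queue the broker, not the subscriber count, decides who gets each message, so adding more consumers scales out the work instead of multiplying the deliveries.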
Description
Perry Myers
2013-12-05 16:39:05 UTC
Copied doc-text from the oslo issue where the bug was ultimately fixed. Not sure if this should really be requires_doc_text- or not.

Ofer, it's not clear how to verify the bug. It seems it was originally detected against the oslo library, as per comment https://bugs.launchpad.net/oslo/+bug/1178375/comments/26. It was then fixed there, and the fix was backported to multiple modules that have the failing code copy-pasted into their source trees. This bug is for the openstack-neutron package, meaning it should be verified against that package, not against the oslo.messaging library used in the comment referred to above. And there are no clear steps on how to verify it against Neutron.

BTW, I've checked whether the steps in that comment still result in incorrect behaviour, and I still see the issue (duplicate messages when using topology=2). We could try to extrapolate the (still incorrect) observed behaviour to openstack-neutron and conclude that the fix didn't fix the issue, but that does not seem strictly correct. Can you elaborate on how to properly verify the bug?

===

For your reference, my steps to reproduce against the upstream fix:

1. Install RHEL 6.5.
2. Install RHOS 4.0.
3. Install pip (e.g. `easy_install pip`).
4. `git clone https://github.com/openstack/oslo.messaging.git` (needed for testing the clients/servers below).
5. `cd oslo.messaging && pip install -r requirements.txt && python ./setup.py install`
6. `git clone https://github.com/kgiusti/oslo-messaging-clients.git`
7. Run two servers with topology=2 and send a message to the servers: the message is delivered to both (duplicate delivery).

(More details on step 7 at: https://bugs.launchpad.net/oslo/+bug/1178375/comments/26)

===

(I'm new to Qpid and OpenStack in general, so read my comment with caution.) I guess we could set up a multi-neutron deployment (how?) and check that a new dhcp_agent is registered by only one of the neutron servers (meaning that new-agent notifications go round-robin).
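The verification idea above (each notification should reach exactly one server) can be expressed as a small duplicate-delivery check. This is an illustrative sketch only: the `received_per_consumer` input is a hypothetical structure one might collect from per-server logs, not something neutron or oslo.messaging provides directly.

```python
def delivery_counts(received_per_consumer):
    """Count how many consumers saw each notification ID.

    received_per_consumer: list of lists of message IDs, one inner list
    per subscribed consumer (e.g. per neutron-server on a shared topic).
    """
    counts = {}
    for messages in received_per_consumer:
        for msg_id in messages:
            counts[msg_id] = counts.get(msg_id, 0) + 1
    return counts


def has_duplicates(received_per_consumer):
    """True if any notification was delivered to more than one consumer."""
    return any(n > 1 for n in delivery_counts(received_per_consumer).values())


# Before the fix (topology=2, per-consumer queues): both servers see msg-1.
broken = has_duplicates([["msg-1"], ["msg-1"]])

# After the fix (shared queue): each message reaches exactly one server.
healthy = has_duplicates([["msg-1"], ["msg-2"]])
```

With real deployments the same check would be applied to the notification IDs each server logged, so `broken` should be true on an unfixed setup and false after applying openstack-neutron-2013.2-14.el6ost.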
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2013-1859.html