Bug 559014 - clustered qpid: durable exchange state not replicated to broker joining cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: qpid-cpp
Version: 1.2
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: 1.3
Target Release: ---
Assignee: Kim van der Riet
QA Contact: Jiri Kolar
URL:
Whiteboard:
Depends On:
Blocks: 601230
 
Reported: 2010-01-26 21:58 UTC by Mike Cressman
Modified: 2018-11-14 20:24 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When a node was added to a cluster after a durable exchange had been defined, the new node lost the durable status for those exchanges. As a result, if the affected node was the first node to be recovered in the cluster, the durable exchanges would have been lost. With this update, exchange durability is passed on to new nodes joining the cluster, so that durable exchanges are no longer at risk of being lost.
Clone Of:
Cloned As: 601230
Environment:
Last Closed: 2010-10-14 16:08:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2010:0773 0 normal SHIPPED_LIVE Moderate: Red Hat Enterprise MRG Messaging and Grid Version 1.3 2010-10-14 15:56:44 UTC

Description Mike Cressman 2010-01-26 21:58:31 UTC
Description of problem:
If a durable exchange exists in a cluster, and a broker joins the cluster, the durable exchange is not listed for the new broker.


Version-Release number of selected component (if applicable):
Fails in latest 1.2 release as well as the latest upstream build (pre-1.3).

How reproducible:
100%

Steps to Reproduce:
1. Start a broker in a cluster at <address1>
2. Run 'qpid-config -a <address1> add exchange ExDurable direct --durable'
3. Run 'qpid-config -a <address1> add exchange Ex direct'
4. Run 'qpid-config -a <address1> exchanges' to see both newly created exchanges
5. Start another broker within the cluster at <address2>.
6. Run 'qpid-config -a <address2> exchanges'.

Actual results:
The output shows only exchange 'Ex' and not 'ExDurable'.

Expected results:
Both exchanges should be there.

Additional info:
Durable queues seem to work.
The same problem occurs if you start both brokers first, then create the durable exchange (it appears on both brokers), then shut one broker down and restart it: the durable exchange disappears on the restarted broker.

Comment 1 Kim van der Riet 2010-01-28 16:55:19 UTC
Fixed in svn r.904154

QE: This is easily tested using the example above. In addition, the same test should pass for brokers without the store loaded. (Since there is no recovery in this test, there should be no difference.)

Comment 2 Jiri Kolar 2010-06-01 15:08:26 UTC
Tested:
On build 752581 the bug appears; on build 946106 it does not. It has been fixed.

Validated on RHEL 5.5 i386/x86_64, but not on RHEL 4, which has no clustering support.

packages:

# rpm -qa | grep -E '(qpid|openais|rhm)' | sort -u

openais-0.80.6-16.el5_5.1
openais-debuginfo-0.80.6-16.el5_5.1
python-qpid-0.7.946106-1.el5
qpid-cpp-client-0.7.946106-1.el5
qpid-cpp-client-devel-0.7.946106-1.el5
qpid-cpp-client-devel-docs-0.7.946106-1.el5
qpid-cpp-client-ssl-0.7.946106-1.el5
qpid-cpp-mrg-debuginfo-0.7.935473-1.el5
qpid-cpp-server-0.7.946106-1.el5
qpid-cpp-server-cluster-0.7.946106-1.el5
qpid-cpp-server-devel-0.7.946106-1.el5
qpid-cpp-server-ssl-0.7.946106-1.el5
qpid-cpp-server-store-0.7.946106-1.el5
qpid-cpp-server-xml-0.7.946106-1.el5
qpid-java-client-0.7.946106-3.el5
qpid-java-common-0.7.946106-3.el5
qpid-tools-0.7.946106-4.el5
rhm-docs-0.7.946106-1.el5

->VERIFIED

Comment 3 Kim van der Riet 2010-10-05 15:59:03 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause: Adding a node to a cluster after durable exchanges have been defined results in the new node losing the durable status for those exchanges.

Consequence: If the affected node is the first to be recovered in the cluster, the durable exchanges will be missing and will be lost.

Fix: The durability of the exchanges is passed on to new nodes.

Result: Nodes joining a cluster after durable exchanges have been defined now correctly mark those exchanges as durable.
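The cause/fix description above can be illustrated with a small Python sketch. This is a toy model only, not the actual qpid-cpp implementation; all names (Exchange, Broker, receive_update, recover) are illustrative. It models how a joining broker that rebuilds its exchange list from a cluster update loses durable exchanges on recovery when the durable flag is not replicated:

```python
# Toy model of replicating exchange state to a broker joining a cluster.
# NOT qpid-cpp code; names and structure are purely illustrative.

from dataclasses import dataclass


@dataclass
class Exchange:
    name: str
    type: str
    durable: bool = False


class Broker:
    def __init__(self):
        self.exchanges = {}

    def declare_exchange(self, name, type, durable=False):
        self.exchanges[name] = Exchange(name, type, durable)

    def receive_update(self, exchanges, replicate_durable):
        # A newly joined broker rebuilds its state from an update sent by
        # an existing member. Before the fix, the durable flag was
        # effectively dropped from that update (modeled by the parameter).
        for ex in exchanges:
            durable = ex.durable if replicate_durable else False
            self.declare_exchange(ex.name, ex.type, durable)

    def recover(self):
        # On restart/recovery, only durable exchanges survive.
        self.exchanges = {n: e for n, e in self.exchanges.items() if e.durable}


# Existing cluster member with one durable and one transient exchange,
# mirroring the reproduction steps (ExDurable and Ex).
member = Broker()
member.declare_exchange("ExDurable", "direct", durable=True)
member.declare_exchange("Ex", "direct")

# Pre-fix behaviour: durability not replicated to the joining node.
joiner_old = Broker()
joiner_old.receive_update(member.exchanges.values(), replicate_durable=False)
joiner_old.recover()
print(sorted(joiner_old.exchanges))  # [] -- ExDurable was lost

# Fixed behaviour: durability passed on to the joining node.
joiner_new = Broker()
joiner_new.receive_update(member.exchanges.values(), replicate_durable=True)
joiner_new.recover()
print(sorted(joiner_new.exchanges))  # ['ExDurable'] -- durable exchange survives
```

The model shows why the consequence in the note is conditional: the transient flag only matters once the affected node is the one recovering cluster state.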

Comment 4 Douglas Silas 2010-10-05 19:43:12 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -1,7 +1 @@
-Cause: Adding a node to a cluster after durable exchanges have been defined result in the new node losing the durable status for those exchanges.
+When a node was added to a cluster after a durable exchange had been defined, the new node lost its durable status for those exchanges. As a result, if the affected node was the first node to be recovered in the cluster, then the durable exchanges would have been lost. With this update, exchange durability is passed on to new nodes joining the cluster, with the result that durable exchanges can no longer potentially be lost.-
-Consequence: If the affected node is the first to be recovered in the cluster, the durable exchanges will be missing and will be lost.
-
-Fix: The durability of the exchanges is passed on to new nodes.
-
-Result: Nodes joining a cluster after durable exchanges have been defined are now correctly setting these as durable.

Comment 6 errata-xmlrpc 2010-10-14 16:08:22 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2010-0773.html

