Red Hat Bugzilla – Bug 498221
when joining cluster, new nodes create all queues as durable
Last modified: 2009-06-12 13:39:18 EDT
Description of problem:
New members of a cluster convert all queues whose definitions they receive via the state update to durable queues.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. start first cluster node
2. create a non-durable queue (e.g. qpid-config -a host1:port1 add queue non-durable)
3. check queue status (e.g. qpid-config -a host1:port1 queues), queue is non-durable
4. start new node for same cluster
5. check queue status on new node (e.g. qpid-config -a host2:port2 queues)
Queue on second node is now durable.
Queue should be non-durable on both nodes.
If a node joins while clients are active, the failover exchange subscription queue (and indeed any temporary queues that are defined) is replicated to the new node as a durable queue, which then remains until manually deleted.
Reproduced on RHEL5.3 i386.
Related packages (mrg-devel repo):
Waiting for new packages to verify.
This is a result of the encode/decode logic in qpid/broker/Queue.cpp not being updated to include the flags (previously it was only used for durable queues, so the durable and exclusive flags were implied).
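The fix described above amounts to encoding the per-queue flags explicitly in the state update rather than assuming them. A minimal sketch of that idea is below; the struct and function names are illustrative, not the actual Qpid API.

```cpp
#include <cstdint>

// Hypothetical per-queue settings carried in a cluster state update.
struct QueueSettings {
    bool durable;
    bool exclusive;
    bool autoDelete;
};

// Pack the flags into a bitmask so the receiving node can reconstruct
// them exactly, instead of treating every replicated queue as durable
// (the behaviour reported in this bug).
uint8_t encodeFlags(const QueueSettings& q) {
    uint8_t flags = 0;
    if (q.durable)    flags |= 1 << 0;
    if (q.exclusive)  flags |= 1 << 1;
    if (q.autoDelete) flags |= 1 << 2;
    return flags;
}

// Reverse of encodeFlags: recover each flag from its bit.
QueueSettings decodeFlags(uint8_t flags) {
    QueueSettings q;
    q.durable    = (flags & (1 << 0)) != 0;
    q.exclusive  = (flags & (1 << 1)) != 0;
    q.autoDelete = (flags & (1 << 2)) != 0;
    return q;
}
```

With this round trip, a non-durable queue on the first node decodes as non-durable on the joining node.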
Created attachment 345616 [details]
Fix (created against r752581)
Fixed on trunk by r779183; attached as a patch against r752581.
Fixed in qpidd-0.5.752581-8
Using 'qpid-config' from package python-qpid-0.5.752581-1.el5 to trigger the bug.
Reproducible on MRG 1.1.1 (mrg-stable):
I was about to try verifying it on:
But I cannot continue until the newly discovered
bug 503025 is fixed.
Verified on -9, RHEL5.3 i386.
I will test x86_64 soon.
Verified on qpidd-0.5.752581-10.el5 RHEL 5.3 x86_64
Verified again on RHEL5.3 i386, this time with the qpidd-0.5.752581-10.el5 series.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.