Bug 711471

Summary: Cluster-durable queue does not become durable when the cluster is reduced to a single node and that node is restarted
Product: Red Hat Enterprise MRG    Reporter: Petra Svobodová <psvobodo>
Component: qpid-cpp                Assignee: messaging-bugs <messaging-bugs>
Status: CLOSED NOTABUG             QA Contact: MRG Quality Engineering <mrgqe-bugs>
Severity: high                     Docs Contact:
Priority: unspecified
Version: Development               CC: freznice, iboverma
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:                  Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2011-06-08 13:36:23 UTC

Description Petra Svobodová 2011-06-07 15:06:43 UTC
Description of problem:
A queue declared with the "cluster-durable" option does not become durable even when only one functioning node remains in the cluster.


Version-Release number of selected component (if applicable):
ruby-qpid-qmf-0.10-10.el5
qpid-cpp-client-ssl-0.10-8.el5
qpid-tests-0.7.946106-1.el5
python-qpid-0.10-1.el5
qpid-tools-0.10-6.el5
qpid-cpp-server-xml-0.10-8.el5
ruby-qpid-0.7.946106-2.el5
qpid-java-example-0.10-6.el5
python-qpid-qmf-0.10-10.el5
qpid-cpp-server-0.10-8.el5
qpid-cpp-server-ssl-0.10-8.el5
qpid-cpp-server-devel-0.10-8.el5
qpid-java-client-0.10-6.el5
qpid-qmf-devel-0.10-10.el5
qpid-cpp-client-0.10-8.el5
qpid-cpp-client-devel-0.10-8.el5
qpid-cpp-server-store-0.10-8.el5
qpid-java-common-0.10-6.el5
qpid-cpp-client-devel-docs-0.10-8.el5
qpid-cpp-server-cluster-0.10-8.el5
qpid-qmf-0.10-10.el5
qpid-cpp-client-rdma-0.10-8.el5
qpid-cpp-server-rdma-0.10-8.el5


How reproducible:
100%

Steps to Reproduce:
1. Connect two brokers to a cluster.
2. Create a new queue with the "cluster-durable" option using the command "qpid-config add queue cd_q --cluster-durable".
3. Stop the qpidd service on one cluster node (so that only one cluster node remains running).
4. Restart the qpidd service on the running cluster node.
5. Display a list of queues.
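
The steps above correspond to the following condensed command sequence (a sketch only; the queue name cd_q and the two-node cluster setup are taken from the transcript in the Additional info section, and each broker is assumed to already be configured for clustering):

```shell
# On either node of the two-node cluster: verify both members are up.
qpid-cluster

# Declare the queue with the cluster-durable option.
qpid-config add queue cd_q --cluster-durable

# On the OTHER node: stop qpidd, leaving a single-node cluster.
service qpidd stop

# Back on the surviving node: confirm the cluster size is now 1,
# then restart the broker on that node.
qpid-cluster
service qpidd restart

# The cluster-durable queue was expected to survive the restart,
# but it no longer appears in the queue list.
qpid-config queues
```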
  
Actual results:
The new cluster-durable queue is removed.

Expected results:
The new cluster-durable queue should not be removed; it should now be durable.

Additional info:
See terminal transcript below:

[root@dhcp-37-202 ~]# qpid-cluster
  Cluster Name: clu
Cluster Status: ACTIVE
  Cluster Size: 2
       Members: ID=192.168.5.1:10336 URL=amqp:tcp:10.34.37.202:5672,tcp:192.168.5.1:5672
              : ID=192.168.5.2:7675 URL=amqp:tcp:10.34.37.203:5672,tcp:192.168.5.2:5672
[root@dhcp-37-202 ~]# qpid-config add queue cd_q --cluster-durable
[root@dhcp-37-202 ~]# qpid-config queues
Queue Name                                             Attributes
===========================================================================
cd_q                                                   --cluster-durable 
qmfagent-45bf8e9c-9260-4de7-affb-15c0c210429b          auto-del excl 
qmfc-v2-dhcp-37-202.lab.eng.brq.redhat.com.10452.1     auto-del excl 
qmfc-v2-hb-dhcp-37-202.lab.eng.brq.redhat.com.10452.1  auto-del excl --limit-policy=ring 
qmfc-v2-ui-dhcp-37-202.lab.eng.brq.redhat.com.10452.1  auto-del excl --limit-policy=ring 
reply-dhcp-37-202.lab.eng.brq.redhat.com.10452.1       auto-del excl 
topic-dhcp-37-202.lab.eng.brq.redhat.com.10452.1       auto-del excl --limit-policy=ring 
[root@dhcp-37-202 ~]# qpid-cluster
  Cluster Name: clu
Cluster Status: ACTIVE
  Cluster Size: 1
       Members: ID=192.168.5.1:10336 URL=amqp:tcp:10.34.37.202:5672,tcp:192.168.5.1:5672
[root@dhcp-37-202 ~]# qpid-config queues
Queue Name                                             Attributes
===========================================================================
cd_q                                                   --cluster-durable 
qmfagent-45bf8e9c-9260-4de7-affb-15c0c210429b          auto-del excl 
qmfc-v2-dhcp-37-202.lab.eng.brq.redhat.com.10472.1     auto-del excl 
qmfc-v2-hb-dhcp-37-202.lab.eng.brq.redhat.com.10472.1  auto-del excl --limit-policy=ring                       
qmfc-v2-ui-dhcp-37-202.lab.eng.brq.redhat.com.10472.1  auto-del excl --limit-policy=ring                       
reply-dhcp-37-202.lab.eng.brq.redhat.com.10472.1       auto-del excl 
topic-dhcp-37-202.lab.eng.brq.redhat.com.10472.1       auto-del excl --limit-policy=ring 
[root@dhcp-37-202 ~]# service qpidd restart
Stopping Qpid AMQP daemon:                                 [  OK  ]
Starting Qpid AMQP daemon:                                 [  OK  ]
[root@dhcp-37-202 ~]# qpid-config queues
Queue Name                                             Attributes
===========================================================================
qmfagent-45bf8e9c-9260-4de7-affb-15c0c210429b          auto-del excl 
qmfc-v2-dhcp-37-202.lab.eng.brq.redhat.com.10536.1     auto-del excl 
qmfc-v2-hb-dhcp-37-202.lab.eng.brq.redhat.com.10536.1  auto-del excl --limit-policy=ring 
qmfc-v2-ui-dhcp-37-202.lab.eng.brq.redhat.com.10536.1  auto-del excl --limit-policy=ring 
reply-dhcp-37-202.lab.eng.brq.redhat.com.10536.1       auto-del excl 
topic-dhcp-37-202.lab.eng.brq.redhat.com.10536.1       auto-del excl --limit-policy=ring

Comment 1 Ted Ross 2011-06-07 19:25:35 UTC
It appears that this bug is due to a misunderstanding of how cluster-durable works (which is no doubt due to the fact that this feature has no documentation).

The following email thread from qpid-users should shed some light on the subject:

http://apache-qpid-users.2158936.n2.nabble.com/Qpid-cluster-durable-queues-td5988904.html

Comment 2 Petra Svobodová 2011-06-08 13:36:23 UTC
Hi Ted,

Thank you for your explanation. Cluster-durable queues behave according to the description at the link. I will close this bug and create a documentation bug.

--> CLOSED