Bug 509803 - Recovered cluster-durable messages are never dequeued from store
Status: CLOSED ERRATA
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: qpid-cpp
Version: 1.1.1
Platform: All Linux
Priority: high / Severity: high
Target Milestone: 1.1.6
Assigned To: Gordon Sim
QA Contact: Jiri Kolar
Reported: 2009-07-06 06:40 EDT by Gordon Sim
Modified: 2009-07-14 13:32 EDT (History)

Doc Type: Bug Fix
Last Closed: 2009-07-14 13:32:24 EDT

Attachments
patch for issue (1.19 KB, patch)
2009-07-06 15:37 EDT, Carl Trieloff

Description Gordon Sim 2009-07-06 06:40:09 EDT
Description of problem:

If the cluster membership drops to a single node, messages on a queue created with the cluster-durable option are persisted to the store. If that node is then stopped and restarted, messages consumed from the queue are never dequeued from the store.

Version-Release number of selected component (if applicable):

qpidd-0.5.752581-22.el5
rhm-0.5.3206-5.el5

How reproducible:

100%

Steps to Reproduce:
1. create two node cluster
2. create queue with cluster-durable option selected
  e.g. qpid-config add queue test-queue --durable --cluster-durable
3. add some transient messages to that queue
  e.g. for i in `seq 1 10`; do echo "Message$i"; done | sender --send-eos 1
4. kill one node of cluster
5. stop and recover the other node
6. consume some messages from the queue
  e.g. receiver --messages 5
7. stop and recover the node again
8. check that consumed messages are no longer on the queue
  e.g. receiver --browse
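
The steps above can be combined into a single script. This is only a sketch, not part of the original report: it assumes a configured two-node MRG cluster is already running, that the `sender`/`receiver` example clients shown in the steps are on the PATH, and that the broker is managed as a system service (the exact stop/recover commands are site-specific).

```shell
#!/bin/sh
# Sketch of the reproduction steps above (assumptions noted in the text).

# Step 2: create a durable queue with the cluster-durable option.
qpid-config add queue test-queue --durable --cluster-durable

# Step 3: add some transient messages to that queue.
for i in `seq 1 10`; do echo "Message$i"; done | sender --send-eos 1

# Step 4: kill one node of the cluster (node selection is site-specific).
# Step 5: stop and recover the surviving node, e.g.:
#   service qpidd stop && service qpidd start

# Step 6: consume some messages from the queue.
receiver --messages 5

# Step 7: stop and recover the node again, then
# Step 8: browse the queue; the consumed messages should no longer appear.
receiver --browse
```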
  
Actual results:

Consumed messages are still on the queue.

Expected results:

Consumed messages are not still on the queue.
Comment 1 Carl Trieloff 2009-07-06 15:37:22 EDT
Created attachment 350667 [details]
patch for issue

Note that this patch should correct the issue, but it needs testing; I don't
currently have access to an environment in which to reproduce the problem and validate the fix.
Comment 2 Gordon Sim 2009-07-07 04:52:51 EDT
The attached patch (id=350667) fixes the issue.
Comment 3 Carl Trieloff 2009-07-07 12:00:41 EDT
patch committed to trunk
Committed revision 791886.
Comment 4 Gordon Sim 2009-07-09 03:03:24 EDT
Fixed in qpidd-0.5.752581-25.el5
Comment 5 Jiri Kolar 2009-07-09 06:10:22 EDT
Tested:
on -22 the bug appears
on -25 it has been fixed

validated on RHEL 5.3 i386 / x86_64

packages:

# rpm -qa | grep -E '(qpid|openais|rhm)' | sort -u

openais-0.80.3-22.el5_3.8
openais-debuginfo-0.80.3-22.el5_3.8
openais-devel-0.80.3-22.el5_3.8
python-qpid-0.5.752581-3.el5
qpidc-0.5.752581-25.el5
qpidc-debuginfo-0.5.752581-22.el5
qpidc-devel-0.5.752581-25.el5
qpidc-perftest-0.5.752581-25.el5
qpidc-rdma-0.5.752581-25.el5
qpidc-ssl-0.5.752581-25.el5
qpidd-0.5.752581-25.el5
qpidd-acl-0.5.752581-25.el5
qpidd-cluster-0.5.752581-25.el5
qpidd-devel-0.5.752581-25.el5
qpid-dotnet-0.4.738274-2.el5
qpidd-rdma-0.5.752581-25.el5
qpidd-ssl-0.5.752581-25.el5
qpidd-xml-0.5.752581-25.el5
qpid-java-client-0.5.751061-8.el5
qpid-java-common-0.5.751061-8.el5
rhm-0.5.3206-6.el5
rhm-docs-0.5.756148-1.el5

->VERIFIED
Comment 7 errata-xmlrpc 2009-07-14 13:32:24 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1153.html
