Bug 473088 - Cluster does not handle flow-to-disk correctly.
Summary: Cluster does not handle flow-to-disk correctly.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: qpid-cpp
Version: beta
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: 1.1.1
Assignee: Carl Trieloff
QA Contact: Kim van der Riet
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-11-26 15:06 UTC by Alan Conway
Modified: 2009-04-21 16:17 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-04-21 16:17:53 UTC
Target Upstream Version:
Embargoed:


Attachments
Fix null message store so it supports flow-to-disk (in memory) (3.15 KB, application/octet-stream)
2008-11-26 15:06 UTC, Alan Conway
no flags


Links
System: Red Hat Product Errata
ID: RHEA-2009:0434
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Enterprise MRG Messaging and Grid Version 1.1.1
Last Updated: 2009-04-21 16:15:50 UTC

Description Alan Conway 2008-11-26 15:06:42 UTC
Created attachment 324727 [details]
Fix null message store so it supports flow-to-disk (in memory)

Description of problem:

A broker with a persistent store may flow message content to disk for large messages, or when a queue exceeds its policy limit. In that case it keeps only the message transfer frame in memory.

Currently the cluster does not correctly replicate such messages to new members.

>> How to fix: Can I use Message::sendContent() in DumpClient to get the
>> content from disk? Does it leave the message in store or do I need to
>> re-release the content? 

It would, provided the content were stored on disk. For NullMessageStore that
won't be the case (and we should prevent it from being released under those
circumstances). In a standard install, however, the real store will be in
place.
[[Attached patch for NullMessageStore]]

>> Or is there another function (should I write
>> one) that will pull the content back in. I'd rather have a FrameSet than
>> have the frames sent to a handler since I want to send this thru the
>> client stack.

We can also do that. We do need to be careful of the concurrency implications,
though, as a Message may be on more than one queue.

Comment 1 Gordon Sim 2009-01-19 19:15:54 UTC
NullMessageStore has been fixed to handle this case. However cluster still needs to call Message::sendContent when replicating messages that have been flowed to disk (or at least disable flowing to disk).

Comment 2 Carl Trieloff 2009-01-19 22:05:10 UTC
Can't disable it; we need to add the call to Message::sendContent.
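
As an illustration of that approach only (not the committed patch), here is a rough sketch of how the dump/update path could pull released content back through Message::sendContent and forward it to the new member. The types, headers and signatures below are assumptions based on the discussion above; check qpid::broker::Message, qpid::framing::FrameHandler and qpid::cluster::DumpClient in the tree for the real interfaces.

#include <qpid/broker/Message.h>
#include <qpid/broker/Queue.h>
#include <qpid/framing/AMQFrame.h>
#include <qpid/framing/FrameHandler.h>
#include <boost/intrusive_ptr.hpp>
#include <stdint.h>
#include <vector>

// Collects the content frames emitted by sendContent so they can be pushed
// through the client stack to the broker being updated.
struct CollectFrames : qpid::framing::FrameHandler {
    std::vector<qpid::framing::AMQFrame> frames;
    void handle(qpid::framing::AMQFrame& frame) { frames.push_back(frame); }
};

// Hypothetical helper, not part of DumpClient: replicate one message whose
// content may have been flowed to disk. sendContent is assumed to reload the
// released content from the store and emit it as content frames; the call
// must be serialized with other users of the message, since a Message may be
// on more than one queue.
void replicateContent(const boost::intrusive_ptr<qpid::broker::Message>& msg,
                      const qpid::broker::Queue& queue,
                      uint16_t maxFrameSize)
{
    CollectFrames out;
    msg->sendContent(queue, out, maxFrameSize);   // assumed signature
    // ... DumpClient would now send out.frames to the updatee ...
}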

Comment 3 Carl Trieloff 2009-01-28 22:21:09 UTC
Testing:

Run the first broker:

./qpidd --auth no --staging-threshold 1000 --load-module .libs/cluster.so --load-module ~/mrg/trunk/cpp/lib/.libs/msgstore.so --store-dir /tmp/b1/ --cluster-name testCluster 

Declare a durable queue, then publish messages larger than the staging threshold so their content flows to disk:

./qpid-config add queue publish-consume --durable

./publish --size 2000 --count 10

Start a second broker that joins the cluster:

./qpidd --auth no --staging-threshold 1000 --load-module .libs/cluster.so --load-module ~/mrg/trunk/cpp/lib/.libs/msgstore.so --store-dir /tmp/b2/ --cluster-name testCluster --port 5673 --data-dir /tmp/b2


The test sequence still needs to be expanded to verify the message data on the second node.
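
One way to do that (a sketch only, modelled on the stock qpid::client C++ listener example; the exact class and method names are assumptions to be checked against the installed headers) is a small consumer that connects to the second broker on port 5673, subscribes to publish-consume and checks that each body carries the full payload:

#include <qpid/client/Connection.h>
#include <qpid/client/Session.h>
#include <qpid/client/SubscriptionManager.h>
#include <qpid/client/MessageListener.h>
#include <qpid/client/Message.h>
#include <iostream>

using namespace qpid::client;

// Prints the body size of each message; 2000-byte bodies mean the content
// that was flowed to disk on the first broker reached the second member.
class VerifyListener : public MessageListener {
    SubscriptionManager& subscriptions;
    int remaining;
  public:
    VerifyListener(SubscriptionManager& s, int expected)
        : subscriptions(s), remaining(expected) {}
    void received(Message& message) {
        std::cout << "body size: " << message.getData().size() << std::endl;
        if (--remaining == 0)
            subscriptions.cancel(message.getDestination());
    }
};

int main() {
    Connection connection;
    connection.open("127.0.0.1", 5673);              // second cluster member
    Session session = connection.newSession();

    SubscriptionManager subscriptions(session);
    VerifyListener listener(subscriptions, 10);      // 10 messages were published
    subscriptions.subscribe(listener, "publish-consume");
    subscriptions.run();                             // returns once the subscription is cancelled

    connection.close();
    return 0;
}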

Comment 4 Carl Trieloff 2009-01-30 18:37:09 UTC
Then run the consumer:


./consume --print-data --log-enable info+
2009-jan-30 13:31:49 info Connecting to tcp:localhost:5672
2009-jan-30 13:31:49 info Known-brokers update: amqp:tcp:10.16.13.40:5672,tcp:10.16.19.90:5672, amqp:tcp:10.16.13.40:5673,tcp:10.16.19.90:5673
2009-jan-30 13:31:49 info Received: 
2009-jan-30 13:31:49 info Data: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX


Check that the message body is present (the XXX data above). If that does not print, the message content was dropped.

Comment 5 Carl Trieloff 2009-01-30 18:59:54 UTC
Fix committed in revision 739378.

Comment 7 Frantisek Reznicek 2009-03-03 13:22:16 UTC
The issue has been fixed and validated on a RHEL 5.3 i386 / x86_64 cluster with packages:
qpidd-0.4.744917-1.el5, python-qpid-0.4.743856-1.el5.

Validated manually and automatically (qpid_test_mnode_cluster).

->VERIFIED

Comment 9 errata-xmlrpc 2009-04-21 16:17:53 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHEA-2009-0434.html

