Bug 473088 - Cluster does not handle flow-to-disk correctly.
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: qpid-cpp
Hardware/OS: All Linux
Priority: high  Severity: high
Version: 1.1.1
Assigned To: Carl Trieloff
Kim van der Riet
Depends On:
Reported: 2008-11-26 10:06 EST by Alan Conway
Modified: 2009-04-21 12:17 EDT
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2009-04-21 12:17:53 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
Fix null message store so it supports flow-to-disk (in memory) (3.15 KB, application/octet-stream)
2008-11-26 10:06 EST, Alan Conway

Description Alan Conway 2008-11-26 10:06:42 EST
Created attachment 324727
Fix null message store so it supports flow-to-disk (in memory)

Description of problem:

A broker with persistent store may flow message content to disk for large messages or if the queue exceeds a policy limit. It keeps only the message transfer frame in memory. 

Currently the cluster does not correctly replicate such messages to new members.

>> How to fix: Can I use Message::sendContent() in DumpClient to get the
>> content from disk? Does it leave the message in store or do I need to
>> re-release the content? 

It would if the content was stored on disk. For NullMessageStore that
won't be the case (and we should prevent it being released under those
circumstances). In a standard install, however, the real store will be in
use. (The attached patch fixes NullMessageStore for this case.)

>> Or is there another function (or should I write
>> one) that will pull the content back in? I'd rather have a FrameSet than
>> have the frames sent to a handler, since I want to send this through the
>> client stack.

We can also do that. We do need to be careful of the concurrency implications,
though, as a Message may be on more than one queue.
Comment 1 Gordon Sim 2009-01-19 14:15:54 EST
NullMessageStore has been fixed to handle this case. However cluster still needs to call Message::sendContent when replicating messages that have been flowed to disk (or at least disable flowing to disk).
Comment 2 Carl Trieloff 2009-01-19 17:05:10 EST
Can't disable it; we need to add the call to Message::sendContent.
Comment 3 Carl Trieloff 2009-01-28 17:21:09 EST

Run one broker:

./qpidd --auth no --staging-threshold 1000 --load-module .libs/cluster.so --load-module ~/mrg/trunk/cpp/lib/.libs/msgstore.so --store-dir /tmp/b1/ --cluster-name testCluster

Declare a queue:

./qpid-config add queue publish-consume --durable

Publish messages large enough to exceed the staging threshold:

./publish --size 2000 --count 10

Start a second broker joining the cluster:

./qpidd --auth no --staging-threshold 1000 --load-module .libs/cluster.so --load-module ~/mrg/trunk/cpp/lib/.libs/msgstore.so --store-dir /tmp/b2/ --cluster-name testCluster --port 5673 --data-dir /tmp/b2

The test sequence still needs to be expanded to verify the data on the second node.
Comment 4 Carl Trieloff 2009-01-30 13:37:09 EST
Then run:

./consume --print-data --log-enable info+
2009-jan-30 13:31:49 info Connecting to tcp:localhost:5672
2009-jan-30 13:31:49 info Known-brokers update: amqp:tcp:,tcp:, amqp:tcp:,tcp:
2009-jan-30 13:31:49 info Received: 

Check that the message body is present (the "XXX" padding). If that does not print, the message content was dropped.
Comment 5 Carl Trieloff 2009-01-30 13:59:54 EST
Fix committed in revision 739378.
Comment 7 Frantisek Reznicek 2009-03-03 08:22:16 EST
The issue has been fixed, validated on RHEL 5.3 i386 / x86_64 cluster on packages:
qpidd-0.4.744917-1.el5, python-qpid-0.4.743856-1.el5.

Validated manually and automatically (qpid_test_mnode_cluster).

Comment 9 errata-xmlrpc 2009-04-21 12:17:53 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

