It was discovered that the 'ring' policy and the 'last value queue' queue type were implemented as incompatible options. Selecting the ring policy meant that the last value queue request was effectively ignored: messages with a given key did not replace older messages on the queue with the same key. Special handling has been added so that the two behaviours are now correctly combined on the same queue. A ring policy on a last value queue now behaves as expected: messages with the same key replace each other on the queue, and the ring policy correctly limits the maximum depth of the queue.
Description of problem:
It is not possible to have an LVQ queue with the ring policy: using the ring policy disables the LVQ feature, so all messages with the same LVQ key are stored in the queue.
Although LVQ semantics already limit queue depth somewhat (to the number of unique LVQ key values), there are still use cases where combining ring and LVQ makes sense.
Version-Release number of selected component (if applicable):
qpid-cpp-0.22-50
How reproducible:
100%
Steps to Reproduce:
address="LVQtest; {create:always, node:{type:queue, x-declare:{ arguments:{'qpid.flow_stop_count':0, 'qpid.max_count':100, 'qpid.last_value_queue':True, 'qpid.last_value_queue_key':'qpid.LVQ_key', 'qpid.policy_type':'ring', 'qpid.flow_resume_count':0, 'qpid.flow_stop_size':0, 'qpid.flow_resume_size':0}}}}"
qpid-send -a "$address" --group-key=key --group-interleave=2 --group-size=3 -m 101
qpid-stat -q LVQtest
Actual results:
queue-depth is 100
Expected results:
queue-depth should be 1
Additional info:
This regression comes from qpid refactor in QPID-4178, see src.code:
boost::shared_ptr<Queue> QueueFactory::create(const std::string& name, const QueueSettings& settings)
{
    settings.validate();
    //1. determine Queue type (i.e. whether we are subclassing Queue)
    // -> if 'ring' policy is in use then subclass
    boost::shared_ptr<Queue> queue;
    if (settings.dropMessagesAtLimit) {
        queue = boost::shared_ptr<Queue>(new LossyQueue(name, settings, settings.durable ? store : 0, parent, broker));
    } else if (settings.lvqKey.size()) {
        std::auto_ptr<MessageMap> map(new MessageMap(settings.lvqKey));
        queue = boost::shared_ptr<Queue>(new Lvq(name, map, settings, settings.durable ? store : 0, parent, broker));
    } else {
        queue = boost::shared_ptr<Queue>(new Queue(name, settings, settings.durable ? store : 0, parent, broker));
    }
Here, settings.dropMessagesAtLimit is true only for a ring queue, so when the ring policy is set the first branch wins and the LVQ branch is never reached.
If the change is intentional and LVQ+ring is not supposed to work, the broker should deny such a queue-create request rather than silently ignoring the LVQ setting, to prevent a false-positive user experience.
Fixed upstream by: https://svn.apache.org/r1650196
(Note: the ring-queue discard, if required, happens before the enqueue, and the LVQ replacement after it. So where the number of unique keys is less than the ring size, a new message may cause an old message to be discarded even though the new message would in fact replace some other message. However, this is considered acceptable, since both behaviours are arguably honoured, even if the ring discard is not strictly logically necessary.)
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHEA-2015-0805.html