Bug 1179609 - ring queue disables LVQ
Summary: ring queue disables LVQ
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: qpid-cpp
Version: 3.0
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: 3.1
Assignee: Gordon Sim
QA Contact: Matej Lesko
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-01-07 08:17 UTC by Pavel Moravec
Modified: 2019-02-15 13:59 UTC
CC: 6 users

Fixed In Version: qpid-cpp-0.30-6
Doc Type: Bug Fix
Doc Text:
It was discovered that the 'ring' policy and the 'last value queue' queue type were implemented as incompatible options. Selecting the ring policy meant the last value queue type request was effectively ignored, and messages with a given key did not replace older messages on the queue with the same key. Special handling has been added that correctly combines the two behaviours on the same queue. A ring policy on a last value queue now behaves as expected: messages with the same key replace each other on the queue, and the ring policy correctly limits the maximum depth of the queue.
Clone Of:
Environment:
Last Closed: 2015-04-14 13:48:50 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Apache JIRA QPID-6299 0 None None None Never
Red Hat Product Errata RHEA-2015:0805 0 normal SHIPPED_LIVE Red Hat Enterprise MRG Messaging 3.1 Release 2015-04-14 17:45:54 UTC

Description Pavel Moravec 2015-01-07 08:17:05 UTC
Description of problem:
It is not possible to have an LVQ queue with the ring policy: using the ring policy disables the LVQ feature, so all messages with the same LVQ key are stored in the queue instead of replacing one another.

Although LVQ semantics already bound the queue depth by the number of unique LVQ key values, there are still use cases where combining ring and LVQ makes sense.


Version-Release number of selected component (if applicable):
qpid-cpp-0.22-50


How reproducible:
100%


Steps to Reproduce:
address="LVQtest; {create:always, node:{type:queue, x-declare:{ arguments:{'qpid.flow_stop_count':0, 'qpid.max_count':100, 'qpid.last_value_queue':True, 'qpid.last_value_queue_key':'qpid.LVQ_key', 'qpid.policy_type':'ring', 'qpid.flow_resume_count':0, 'qpid.flow_stop_size':0, 'qpid.flow_resume_size':0}}}}"
qpid-send -a "$address" --group-key=key --group-interleave=2 --group-size=3 -m 101
qpid-stat -q LVQtest
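
A programmatic variant of the reproducer, as a minimal qpid::messaging sketch (broker URL, key value and message bodies are illustrative): every message carries the same LVQ key value, so on a working LVQ the final queue depth should be 1 regardless of the ring limit.

#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Message.h>
#include <qpid/messaging/Sender.h>
#include <qpid/messaging/Session.h>
#include <sstream>

int main() {
    using namespace qpid::messaging;
    Connection connection("localhost:5672"); // illustrative broker URL
    connection.open();
    Session session = connection.createSession();
    Sender sender = session.createSender(
        "LVQtest; {create:always, node:{type:queue, x-declare:{arguments:"
        "{'qpid.max_count':100, 'qpid.last_value_queue':True,"
        " 'qpid.last_value_queue_key':'qpid.LVQ_key',"
        " 'qpid.policy_type':'ring'}}}}");
    for (int i = 0; i < 101; ++i) {
        std::ostringstream body;
        body << "update-" << i;
        Message message(body.str());
        // Every message carries the same LVQ key value, so each send
        // should replace the previous message on a working LVQ.
        message.getProperties()["qpid.LVQ_key"] = "sensor-1";
        sender.send(message);
    }
    session.sync();
    connection.close();
    return 0;
}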


Actual results:
queue-depth is 100


Expected results:
queue-depth should be 1


Additional info:
This regression comes from the qpid refactoring in QPID-4178; see the source code:

boost::shared_ptr<Queue> QueueFactory::create(const std::string& name, const QueueSettings& settings)
{
    settings.validate();

    //1. determine Queue type (i.e. whether we are subclassing Queue)
    // -> if 'ring' policy is in use then subclass
    boost::shared_ptr<Queue> queue;
    if (settings.dropMessagesAtLimit) {
        queue = boost::shared_ptr<Queue>(new LossyQueue(name, settings, settings.durable ? store : 0, parent, broker));
    } else if (settings.lvqKey.size()) {
        std::auto_ptr<MessageMap> map(new MessageMap(settings.lvqKey));
        queue = boost::shared_ptr<Queue>(new Lvq(name, map, settings, settings.durable ? store : 0, parent, broker));
    } else {
        queue = boost::shared_ptr<Queue>(new Queue(name, settings, settings.durable ? store : 0, parent, broker));
    }

Here, settings.dropMessagesAtLimit is true only for a ring queue, so when the ring policy is set the first branch wins, the LVQ branch is never reached, and lvqKey is silently ignored.
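
One possible shape of a fix (a sketch only, not the upstream patch in r1650196; it assumes Lvq can be taught to honour dropMessagesAtLimit) is to test lvqKey first:

// Sketch: take the LVQ branch even when the ring policy is also set.
// The assumption is that Lvq itself then enforces dropMessagesAtLimit
// by discarding the oldest message when the ring limit is reached.
if (settings.lvqKey.size()) {
    std::auto_ptr<MessageMap> map(new MessageMap(settings.lvqKey));
    queue = boost::shared_ptr<Queue>(new Lvq(name, map, settings, settings.durable ? store : 0, parent, broker));
} else if (settings.dropMessagesAtLimit) {
    queue = boost::shared_ptr<Queue>(new LossyQueue(name, settings, settings.durable ? store : 0, parent, broker));
} else {
    queue = boost::shared_ptr<Queue>(new Queue(name, settings, settings.durable ? store : 0, parent, broker));
}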

Comment 2 Pavel Moravec 2015-01-07 09:37:21 UTC
If the change is intentional and LVQ+ring is not supposed to work, the broker should deny such a queue-create request, to prevent a false-positive user experience.
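
If denial were the chosen behaviour, a hypothetical sketch (member names follow the QueueFactory snippet above; the exception type and message are illustrative) might be:

// Hypothetical: reject the incompatible combination at creation time.
void QueueSettings::validate() const
{
    if (dropMessagesAtLimit && lvqKey.size())
        throw qpid::Exception("Cannot combine qpid.policy_type=ring with qpid.last_value_queue");
    // ... existing checks ...
}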

Comment 3 Gordon Sim 2015-01-07 23:33:01 UTC
Fixed upstream by: https://svn.apache.org/r1650196

(Note: the ring-queue discard, if required, happens before enqueue and the LVQ replacement after it, so where the number of unique keys is less than the ring size, a new message may cause an old message to be discarded even though the new message would in fact replace some other message. However, this is considered acceptable since both behaviours are arguably honoured, even if the ring discard is not strictly logically necessary.)
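
A toy simulation of that ordering (self-contained, not broker code; it simplifies real LVQ ordering by keeping the replacing message at the tail):

#include <deque>
#include <iostream>
#include <string>
#include <utility>

typedef std::pair<std::string, std::string> Msg; // (LVQ key, body)

// Order of operations as described above:
// 1. ring discard (before enqueue), 2. enqueue, 3. LVQ replacement.
void publish(std::deque<Msg>& q, const Msg& m, size_t ringLimit)
{
    if (q.size() >= ringLimit) q.pop_front();           // ring: drop oldest
    q.push_back(m);                                     // enqueue
    for (std::deque<Msg>::iterator i = q.begin(); i != q.end() - 1; ++i) {
        if (i->first == m.first) { q.erase(i); break; } // lvq: replace older same-key message
    }
}

int main()
{
    std::deque<Msg> q;
    publish(q, Msg("A", "A1"), 2);
    publish(q, Msg("B", "B1"), 2);
    publish(q, Msg("B", "B2"), 2); // ring discards A1 first, even though
                                   // B2 would simply have replaced B1
    for (size_t i = 0; i < q.size(); ++i)
        std::cout << q[i].second << std::endl;          // prints only "B2"
    return 0;
}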

Comment 5 Zdenek Kraus 2015-01-30 19:13:42 UTC
Gordon,

can you please briefly describe the expected behaviour after fix?

Thank you.

Comment 9 errata-xmlrpc 2015-04-14 13:48:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2015-0805.html

