Bug 453538
Summary: [RFE] Support for message priority
Product: Red Hat Enterprise MRG
Component: qpid-cpp
Version: 1.0
Reporter: Gordon Sim <gsim>
Assignee: Gordon Sim <gsim>
QA Contact: Frantisek Reznicek <freznice>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
CC: agoldste, cww, esammons, freznice, iboverma, jeder, jneedle, mhusnain, tao, tross
Target Milestone: 2.0
Target Release: ---
Keywords: FutureFeature, Triaged
Hardware: All
OS: Linux
Fixed In Version: qpid-cpp-mrg-0.9.1073306-1
Doc Type: Enhancement
Doc Text:
Cause:
The broker does not take any account of signaled message priority.
Consequence:
Applications are unable to rely on the broker to deliver higher priority messages before lower priority ones and are forced to implement more convoluted logic to work around this.
Change:
The broker can now be configured such that particular queues recognize message priority and adjust delivery appropriately.
Result:
Applications that need prioritized delivery can get this directly from the broker's queue implementation.
Release Note Entry:
Previously, the Messaging Broker did not take signaled message priority into account during message delivery. The Broker can now be configured to recognize higher priority messages and adjust delivery accordingly.
Story Points: ---
Last Closed: 2011-06-23 15:47:05 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Depends On: 683594
Bug Blocks: 622829 (view as bug list)
Description (Gordon Sim, 2008-07-01 08:55:02 UTC)
The AMQP 0-10 specification requires at least two priority levels. We'd additionally like to be able to get messages out of the queue using an algorithm that can help avoid starvation of lower priority messages. We'd like to be able to specify that a priority queue should deliver up to "n" messages per priority level per cycle (or all messages, to use the standard/default behavior). This way, we could get e.g. up to 10 high priority messages, then up to 10 medium priority messages, then up to 10 low priority messages, then repeat. If there are more than 10 messages in a given priority level, the remainder stays in that level and delivery moves on to the next one. This configuration option can help avoid starvation of the lower priorities (if there are lots of high priority messages, those with lower priorities might never be delivered).

A queue can now be created to recognise a specified number of distinct priorities. The argument (passed to declare) for this is qpid.priorities (with x-qpid-priorities as an alias, to comply with the precedent set by the Qpid Java Broker). Messages will be delivered in priority order to consumers. Browsers currently see messages in FIFO order; however, the browse ordering should be considered undefined (and not relied on).

To test, create a queue with qpid.priorities=10 (use the new --argument option to qpid-config, an address option, or the new QMF create method on the broker), send messages with different priorities (p where 0 <= p <= 9), then consume those messages and verify that they are delivered highest priority first.

*** Bug 578621 has been marked as a duplicate of this bug. ***

The feature testing is split into four parts:
a] running all available queue priority related unit tests
b] a more focused test on simple priority without fairshare
c] a more focused test on simple priority with common fairshare (x-qpid-fairshare: <N>)
d] a more focused test on simple priority with custom fairshare (x-qpid-fairshare-<i>: <Ni>)

Points a], b] and c] work as expected. There are some problems with test d], showing that for part of the tests the broker behaves properly, but then suddenly the priority ordering turns into something unexplainable. Consider this case d] example:

setup: loops:10, msg_cnt:15, msg_pri_level_cnt:10, fairshare:[3, 5, 3, 2, 4, 5, 5, 3, 5, 4],
addr: test_queue_priority_failshare_custom_10__-address-9;
{node: {x-declare: {auto-delete: True, exclusive: False, arguments: {x-qpid-fairshare-0: 3, x-qpid-fairshare-1: 5, x-qpid-fairshare-2: 3, x-qpid-fairshare-3: 2, x-qpid-fairshare-4: 4, x-qpid-fairshare-5: 5, x-qpid-fairshare-6: 5, x-qpid-fairshare-7: 3, x-qpid-fairshare-8: 5, x-qpid-fairshare-9: 4, x-qpid-priorities: 10}}, durable: False}, create: sender, delete: receiver}

>tx>'msg:0000, pri:1'<
>tx>'msg:0001, pri:3'<
>tx>'msg:0002, pri:None'<
>tx>'msg:0003, pri:8'<
>tx>'msg:0004, pri:9'<
>tx>'msg:0005, pri:3'<
>rx>'msg:0001, pri:3' vs. 'msg:0004, pri:9'<
>tx>'msg:0006, pri:6'<
>tx>'msg:0007, pri:0'<
>rx>'msg:0005, pri:3' vs. 'msg:0003, pri:8'<
>tx>'msg:0008, pri:8'<
>tx>'msg:0009, pri:0'<
>rx>'msg:0000, pri:1' vs. 'msg:0008, pri:8'<
>tx>'msg:0010, pri:4'<
>rx>'msg:0002, pri:None' vs. 'msg:0006, pri:6'<
>tx>'msg:0011, pri:5'<
>rx>'msg:0007, pri:0' vs. 'msg:0011, pri:5'<
>rx>'msg:0009, pri:0' vs. 'msg:0010, pri:4'<
>tx>'msg:0012, pri:2'<
>rx>'msg:0004, pri:9' vs. 'msg:0001, pri:3'<
>rx>'msg:0003, pri:8' vs. 'msg:0005, pri:3'<
>tx>'msg:0013, pri:2'<
>rx>'msg:0008, pri:8' vs. 'msg:0012, pri:2'<
>rx>'msg:0006, pri:6' vs. 'msg:0013, pri:2'<
>tx>'msg:0014, pri:7'<
>rx>'msg:0011, pri:5' vs. 'msg:0000, pri:1'<
>rx>'msg:0010, pri:4' vs. 'msg:0002, pri:None'<
>rx>'msg:0012, pri:2' vs. 'msg:0007, pri:0'<
>rx>'msg:0013, pri:2' vs. 'msg:0009, pri:0'<
>rx>'msg:0014, pri:7'<

Legend:
>tx>'...'< means transmission client -> broker
>rx>'...'< means reception without problem
>rx>'%s vs. %s'< means a reception error: reality (former %s) mismatches the model (latter %s)

Gordon, could you check the above tx & rxes to confirm that this is incorrect, please? Raising the NEEDINFO.

Created attachment 496506 [details]
The developed custom unit tests showing the issue with point d]
The attachment contains test code and logs from RHEL5.6 i/x.
To run in verbose mode:
CUST_PY_UNITS_DEBUG=1 PYTHONPATH=${PYTHONPATH}:custom_py_units qpid-python-test -m custom_py_units custom_py_units.queue.HLAPITests.test_queue_priority*
To run in normal mode:
PYTHONPATH=${PYTHONPATH}:custom_py_units qpid-python-test -m custom_py_units custom_py_units.queue.HLAPITests.test_queue_priority*
Currently the test lists all messages sent and received to aid debugging; for that purpose the assertions were disabled.
Feel free to re-enable them at line custom_py_units/queue.py:1425:
-if (False):
+if (True):
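For reference, the fairshare delivery scheme under test (deliver up to "n" messages per priority level per cycle, as described in the original request) can be sketched in plain Python. This is an illustrative model only; FairshareQueue and its method names are made up here and are not broker or test-library code.

```python
# Sketch of fairshare delivery: per cycle, take up to counts[p] messages
# from each priority level p, scanning from the highest level down.
# Illustrative model only, not the actual broker implementation.
from collections import deque


class FairshareQueue:
    def __init__(self, levels, counts):
        # counts[p] = max messages delivered from priority p per cycle
        self.levels = levels
        self.counts = counts
        self.fifos = [deque() for _ in range(levels)]

    def enqueue(self, priority, msg):
        self.fifos[priority].append(msg)

    def dequeue_cycle(self):
        # One cycle: highest priority first, capped per level, remainder
        # stays queued for the next cycle (this is what avoids starvation).
        for p in range(self.levels - 1, -1, -1):
            for _ in range(min(self.counts[p], len(self.fifos[p]))):
                yield self.fifos[p].popleft()

    def drain(self):
        # Repeat cycles until empty, as a consumer with ample credit would.
        out = []
        while any(self.fifos):
            out.extend(self.dequeue_cycle())
        return out


q = FairshareQueue(3, [2, 2, 2])
for m in ["a", "b", "c"]:
    q.enqueue(2, m)          # three high-priority messages
q.enqueue(0, "x")            # one low-priority message
print(q.drain())             # ['a', 'b', 'x', 'c'] -- 'x' is served before 'c'
```

Note how the low-priority message is delivered before the third high-priority one: the per-cycle cap of 2 forces the scan to move down the levels, which is the starvation-avoidance property the feature request asks for.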
Running that sequence of inputs and outputs, I see the expected sequence delivered (i.e. the 'y' in the 'x vs. y' pattern). I used qpid-send and drain rather than the test code. I wonder, therefore, if there is some other aspect of the test that is relevant here (I can't see anything as yet).

That is interesting. I'm suspicious that there might be an issue which does not occur immediately, as my test showed failures only after a couple of runs. More precisely, case d] consists of 10 runs for different addresses (durability/exclusivity/...), and as far as I saw, the broker started to fail around 3/4 of the way through the runs.

Could it be contamination between runs? E.g. a queue left over from a previous run, so that re-declaring it with new values has no effect?

Looks to me like a test issue. The sender and receivers are not closed, meaning the session will cache them against the address node name, which is the same for every iteration in the loop. If you add an r.close() and s.close() at line 1552 (i.e. at the end of each loop), you get different behaviour. It still fails, but this time it's a queue-not-found exception. That looks like it may be related to the create/delete behaviour and the auto-deletion flag.

There is some issue here with auto-delete=True, exclusive=False and the HLAPI testlib's cleanup method. I can't quite put my finger on it, but these three together result in the queue-not-found errors. It appears that somehow the auto-creation is skipped on the second use of a queue name with the above properties. Whether this is an issue with the python client or with the test/testlib I can't quite tell yet. However, it has nothing to do with the broker nor with priority queues as far as I can tell. I've reproduced something very similar in a simple test case: https://issues.apache.org/jira/browse/QPID-3242. It's a separate issue to the feature in question here (though at present it holds up this particular test).

Created attachment 496709 [details]
A fix to the test and a change to the test framework
Replacing the address based approach to cleanup with a QMF based approach avoids the issue above and allows this test to complete successfully. I have attached a patch representing my changes to the tests. With those in place it passes reliably for me.
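For readers following the "reality vs. model" comparisons in the trace above, the expected-delivery model for a plain priority queue (strict priority order, FIFO within a level, no fairshare) can be sketched as follows. PriorityModel is an illustrative name, not part of the actual test library; the default priority of 0 for unset priorities is an assumption.

```python
# Sketch of the reference model a priority-queue test might compare
# broker deliveries against: highest priority first, FIFO within a level.
# Illustrative only; names and defaults are assumptions, not testlib code.
import heapq
import itertools


class PriorityModel:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break within a level

    def send(self, msg, priority=0):
        # heapq is a min-heap, so negate the priority to pop highest first
        heapq.heappush(self._heap, (-priority, next(self._seq), msg))

    def receive(self):
        return heapq.heappop(self._heap)[2]


model = PriorityModel()
for name, pri in [("msg:0000", 1), ("msg:0001", 3), ("msg:0004", 9)]:
    model.send(name, pri)
print([model.receive() for _ in range(3)])
# ['msg:0004', 'msg:0001', 'msg:0000']
```

A test can then drive the broker and this model with the same sends, and report a mismatch (the ">rx>'actual vs. model'<" lines above) whenever the broker's delivery order diverges from the model's.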
Indeed, you are right; after the unit test fix I can see the whole feature working perfectly. All the points a], b], c], d] are tested on RHEL 5.6 / 6.1s5 i[36]86 / x86_64 on packages:

python-qpid-0.10-1.el5.noarch
python-qpid-qmf-0.10-6.el5.x86_64
qpid-cpp-client-0.10-4.el5.x86_64
qpid-cpp-client-devel-0.10-4.el5.x86_64
qpid-cpp-client-devel-docs-0.10-4.el5.x86_64
qpid-cpp-client-rdma-0.10-4.el5.x86_64
qpid-cpp-client-ssl-0.10-4.el5.x86_64
qpid-cpp-mrg-debuginfo-0.10-4.el5.x86_64
qpid-cpp-server-0.10-4.el5.x86_64
qpid-cpp-server-cluster-0.10-4.el5.x86_64
qpid-cpp-server-devel-0.10-4.el5.x86_64
qpid-cpp-server-rdma-0.10-4.el5.x86_64
qpid-cpp-server-ssl-0.10-4.el5.x86_64
qpid-cpp-server-store-0.10-4.el5.x86_64
qpid-cpp-server-xml-0.10-4.el5.x86_64
qpid-dotnet-0.4.738274-2.el5.x86_64
qpid-java-client-0.10-4.el5.noarch
qpid-java-common-0.10-4.el5.noarch
qpid-java-example-0.10-4.el5.noarch
qpid-qmf-0.10-6.el5.x86_64
qpid-qmf-debuginfo-0.10-6.el5.x86_64
qpid-qmf-devel-0.10-6.el5.x86_64
qpid-tests-0.10-1.el5.noarch
qpid-tools-0.10-4.el5.noarch
rh-qpid-cpp-tests-0.10-4.el5.x86_64
ruby-qpid-qmf-0.10-6.el5.x86_64
sesame-0.10-1.el5.x86_64
sesame-debuginfo-0.10-1.el5.x86_64

Waiting for the linked documentation bug 683594, which needs to be resolved first.

All dependencies resolved/finished. -> VERIFIED

Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team. Diffed Contents:

@@ -1,11 +1,15 @@
 Cause:
-Broker does not take any account of signalled message priority.
+Broker does not take any account of signaled message priority.
 Consequence:
 Applications are unable to rely on broker to deliver higher priority messages before lower priority ones and are forced to implement more convoluted logic to workaround this.
 Change:
-Broker can now be configured such that particular queues recognise message priority and adjust delivery appropriately.
+Broker can now be configured such that particular queues recognize message priority and adjust delivery appropriately.
 Result:
-Applications that need prioritised delivery can get this directly from the brokers queue implementation.
+Applications that need prioritized delivery can get this directly from the brokers queue implementation.
+
+Release Note Entry:
+
+Previously, the Messaging Broker did not take signaled message priority into account during message delivery. The Broker can now be configured to recognize higher priority messages and adjust delivery accordingly.

The technical note can be viewed in the release notes for 2.0 at the documentation stage here: http://documentation-stage.bne.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/2.0/html-single/MRG_Release_Notes/index.html#tabl-MRG_Release_Notes-RHM_Update_Notes-RHM_Update_Notes

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHEA-2011-0890.html