This may be addressed by https://bugzilla.redhat.com/show_bug.cgi?id=660289, which allows a QMF event message to be sent out when a queue reaches a particular limit. By default that limit is 80% of any configured size (i.e. the event would be sent *before* any message loss), but it can be configured on a per-queue basis. Would this address the customer's use case?
To get a QMF alert in an application, one will have to listen for QMF events too, right? I'm not sure that is what they're looking for, but I'll check if they can make do with this in the meantime.
Yes, but with QMFv2 that can be done relatively easily using the messaging API: you create a receiver with a specific source address and then handle the incoming map-formatted event messages.
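To illustrate the consuming side: a sketch in Python using the qpid.messaging client. The receiver setup is shown in comments because it needs a running broker; the address 'qmf.default.topic/agent.ind.event.#' is the broker's QMFv2 event topic. The map layout assumed below (queue name under a '_values'/'qName' entry, as in the queueThresholdExceeded event) should be checked against actual broker output before relying on it.

```python
# Receiver setup (requires a running broker; shown for context only):
#   from qpid.messaging import Connection
#   conn = Connection.establish("localhost:5672")
#   session = conn.session()
#   rx = session.receiver("qmf.default.topic/agent.ind.event.#")
#   msg = rx.fetch()   # msg.content carries the map-formatted event

def threshold_event_queue(event):
    """Extract the queue name from a QMF queue-threshold event map.

    The '_values'/'qName' layout is an assumption based on the
    queueThresholdExceeded event; verify against real event messages.
    """
    values = event.get("_values", {})
    return values.get("qName")

# Example event map (illustrative, not captured from a broker):
sample = {"_values": {"qName": "q1", "msgDepth": 80}}
```

An application could then compare the extracted queue name against the queues it is attached to, which is exactly where the ACL concern in the next comments comes in: the filtering here happens client-side, not broker-side.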
The customer needs a way to set ACLs that filter events for applications based on the objects on which the events occurred. So if an application is attached to queue q1, it should only be allowed to get qmf events on q1, not on any other queue q2.
Another implementation suggestion would be to send the deleted messages to the alternate exchange associated with the queue. That would let any application detect them by binding to that exchange as appropriate, and it is also more amenable to ACL control. To avoid this mechanism causing the build-up of messages that the ring policy is attempting to prevent, the queue bound to the alternate exchange could itself be a ring queue of size 1, meaning it only ever holds the last dropped message, which I think would be sufficient to signal that messages are being dropped. Just a suggestion; other approaches are of course possible. This, however, would be simple to implement.
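One possible wiring of that suggestion with qpid-config, as a config fragment (the exchange and queue names here are made up for illustration, and this assumes qpid-config's --alternate-exchange and --limit-policy options; requires a running broker):

```shell
# Hypothetical names: 'dropped-exch' catches messages displaced from 'myqueue'.
qpid-config add exchange fanout dropped-exch
qpid-config add queue myqueue --max-queue-count=1000 --limit-policy=ring \
    --alternate-exchange=dropped-exch
# Size-1 ring queue: only ever holds the most recently dropped message,
# so it acts as a "loss happened" signal without accumulating backlog.
qpid-config add queue dropped-signal --max-queue-count=1 --limit-policy=ring
qpid-config bind dropped-exch dropped-signal
```

An application would then watch dropped-signal (or bind its own queue to dropped-exch) to learn that the ring policy has discarded something.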
Customer feedback: Regarding the solutions suggested in the JIRA (from my point of view)...

a) The QMF broadcast sounds like a reasonable solution to me. But QMF has issues with access control (see case 748413 [bug 883866 - Ray]), so it might not be easy to use (at least for us).

b) I can imagine creating a construct using an alternate exchange to receive some of the overwritten messages as a sign that the overwriting is happening. But such a construct will not exactly be user-friendly and will probably involve another ring-type queue. It gets additionally complicated when you are working with multiple ring-type queues receiving the same messages at the same time, some of them having messages overwritten and some of them not. Also, it is a bit of overkill to resend all the messages to an alternate exchange just to give a "signal" that they are being overwritten (when you create a ring-type queue, you expect the messages to get lost; you just want to know when it happens).

The solutions I would imagine as better options would be:

I) Assigning a new custom header to the messages to mark that some messages were overwritten (i.e. something similar to the redelivered flag).

II) Adding queue-level sequencing which would work in a similar way to exchange-level sequencing, but with the sequence IDs assigned only to the messages which are routed into the queue (plus, the sequence would be persisted to possibly survive a restart of the broker - see 609310 [bug 800832 / bug 800322 - Ray]). The sequence IDs can be used by the client to detect gaps. (This is, from my perspective, the best solution.)
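Option II can be sketched from the client side. Assuming the broker stamps each enqueued message with a monotonically increasing per-queue sequence number exposed as a message property (the property name 'msgsq' matches what the later comments configure via qpid.queue_msg_sequence; the message shape below is illustrative), a consumer detects overwritten messages by watching for gaps:

```python
def count_lost(messages, seq_key="msgsq"):
    """Count messages dropped from a ring queue by looking for gaps in a
    per-queue sequence number stamped on each message's properties."""
    lost = 0
    prev = None
    for msg in messages:
        seq = msg["properties"][seq_key]
        if prev is not None and seq != prev + 1:
            lost += seq - prev - 1  # sequence IDs skipped between prev and seq
        prev = seq
    return lost

# Example: a consumer that receives sequence IDs 1,2,3,5,6 knows exactly
# one message (ID 4) was overwritten before it could be fetched.
received = [{"properties": {"msgsq": n}} for n in (1, 2, 3, 5, 6)]
```

This only detects gaps between messages the client actually receives; a gap before the first received message is not visible unless the client also remembers the last sequence ID from a previous session.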
Created attachment 748333 [details] Adding a way to store and retrieve queue level sequencing in message annotations Implements solution II) from comment 26. This does not address persisting to survive a broker restart.
http://svn.apache.org/r1485001
We've discovered problems with this implementation.

1. The implementation is not feature complete: it does not support persistence / broker restart. Either address this issue, or add a note to the Doc Text to let everyone know that this will not be implemented.

2. The persistent scenario is broken. It throws an unexpected exception:

qpid-send: framing-error: Queue "q": Dequeuing message with null persistence Id. (/builddir/build/BUILD/qpid-0.22/cpp/src/qpid/legacystore/MessageStoreImpl.cpp:1370)

Reproducing scenario:

1. Create a queue with durable=true, policy=ring, max_count=5, queue_msg_sequence 'msgsq':

./qc2_drain "q;{create:always, node:{type:queue, durable:true, x-declare:{arguments:{'qpid.max_count': 5, qpid.queue_msg_sequence: 'msgsq', qpid.policy_type: 'ring' }}}}"

2. Send 6 messages, causing the first one to be overwritten:

./qc2_qpid-send --durable yes -m 6 -a "q"

2013-07-25 10:20:20 [Client] warning Broker closed connection: 501, Queue "q": Dequeuing message with null persistence Id. (/builddir/build/BUILD/qpid-0.22/cpp/src/qpid/legacystore/MessageStoreImpl.cpp:1370)
qpid-send: framing-error: Queue "q": Dequeuing message with null persistence Id. (/builddir/build/BUILD/qpid-0.22/cpp/src/qpid/legacystore/MessageStoreImpl.cpp:1370)

3. Then try to read messages from that queue; the receiver hits the same problem:

./qc2_qpid-receive --print-headers yes -a "q"

Durable: true
Properties: {msgsq:2, sn:2, ts:1374740420923996565, x-amqp-0-10.routing-key:q}
Durable: true
Properties: {msgsq:3, sn:3, ts:1374740420924015482, x-amqp-0-10.routing-key:q}
Durable: true
Properties: {msgsq:4, sn:4, ts:1374740420924476712, x-amqp-0-10.routing-key:q}
Durable: true
Properties: {msgsq:5, sn:5, ts:1374740420924496238, x-amqp-0-10.routing-key:q}

2013-07-25 10:21:05 [Client] warning Broker closed connection: 501, Queue "q": Dequeuing message with null persistence Id. (/builddir/build/BUILD/qpid-0.22/cpp/src/qpid/legacystore/MessageStoreImpl.cpp:1370)
qpid-receive: Failed to connect (reconnect disabled)

You can repeat step 3 (we tried 4 times), and the exception is still thrown. -> ASSIGNED
Issue (2) is described by bug 702656.
For issue 1: Updated the doc text to indicate that persistence of the sequence number is not supported.

For issue 2: Kim, this appears to be the same error that was fixed in bug 702656. I am modifying the message headers when the message is pushed onto the queue. Could that be the cause of the problem?

Note: this error only happens when:
- the store is loaded
- the queue is declared as durable
- the queue is a ring queue
- the message header is modified when the message is pushed onto the queue (because the queue was declared with the qpid.queue_msg_sequence header property)
- the queue is at max messages and another message is added
Issue two is a general bug in adding annotations to durable messages after they have been written to the store. See https://issues.apache.org/jira/browse/QPID-5041, which is now fixed upstream by https://svn.apache.org/r1510696.
Irina, could you please check on the Comment #42 question?
The feature itself is okay. Tested on RHEL 6.4 i686 & x86_64 with packages:

perl-qpid-0.22-5.el6.x86_64
perl-qpid-debuginfo-0.22-5.el6.x86_64
python-qpid-0.22-5.el6.noarch
python-qpid-qmf-0.22-15.el6.x86_64
qpid-cpp-client-0.22-19.el6.x86_64
qpid-cpp-client-devel-0.22-19.el6.x86_64
qpid-cpp-client-devel-docs-0.22-19.el6.noarch
qpid-cpp-client-rdma-0.22-19.el6.x86_64
qpid-cpp-client-ssl-0.22-19.el6.x86_64
qpid-cpp-debuginfo-0.22-19.el6.x86_64
qpid-cpp-server-0.22-19.el6.x86_64
qpid-cpp-server-devel-0.22-19.el6.x86_64
qpid-cpp-server-ha-0.22-19.el6.x86_64
qpid-cpp-server-rdma-0.22-19.el6.x86_64
qpid-cpp-server-ssl-0.22-19.el6.x86_64
qpid-cpp-server-store-0.22-19.el6.x86_64
qpid-cpp-server-xml-0.22-19.el6.x86_64
qpid-cpp-tar-0.22-16.el6.noarch
qpid-java-client-0.23-3.el6.noarch
qpid-java-common-0.23-3.el6.noarch
qpid-java-example-0.23-3.el6.noarch
qpid-proton-c-0.5-6.el6.x86_64
qpid-qmf-0.22-15.el6.x86_64
qpid-tools-0.22-6.el6.noarch

However, it can only be fully verified after the Str/Int blocker is resolved.
The blocking Str/Int issue was resolved, and this was additionally tested on RHEL 6.4 i686 and x86_64 with packages:

perl-qpid-0.22-5.el6
perl-qpid-debuginfo-0.22-5.el6
python-qpid-0.22-5.el6
python-qpid-qmf-0.22-20.el6
qpid-cpp-client-0.22-23.el6
qpid-cpp-client-devel-0.22-23.el6
qpid-cpp-client-devel-docs-0.22-23.el6
qpid-cpp-client-rdma-0.22-23.el6
qpid-cpp-client-ssl-0.22-23.el6
qpid-cpp-debuginfo-0.22-23.el6
qpid-cpp-server-0.22-23.el6
qpid-cpp-server-devel-0.22-23.el6
qpid-cpp-server-ha-0.22-23.el6
qpid-cpp-server-rdma-0.22-23.el6
qpid-cpp-server-ssl-0.22-23.el6
qpid-cpp-server-store-0.22-23.el6
qpid-cpp-server-xml-0.22-23.el6
qpid-cpp-tar-0.22-16.el6
qpid-java-client-0.23-4.el6
qpid-java-common-0.23-4.el6
qpid-java-example-0.23-4.el6
qpid-proton-c-0.5-6.el6
qpid-qmf-0.22-20.el6
qpid-tools-0.22-6.el6

It behaves as expected. -> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-1296.html