Bug 1418976 - MDBs sending messages to themselves wait too long every maxSession times
Summary: MDBs sending messages to themselves wait too long every maxSession times
Alias: None
Product: JBoss Enterprise Application Platform 6
Classification: JBoss
Component: HornetQ
Version: 6.3.0
Hardware: All
OS: All
Target Milestone: ---
Assignee: jboss-set
QA Contact: Miroslav Novak
Depends On:
Reported: 2017-02-03 10:27 UTC by fuchs@kwsoft.de
Modified: 2017-02-06 15:51 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-02-06 15:51:27 UTC
Type: Bug


Description fuchs@kwsoft.de 2017-02-03 10:27:12 UTC
Description of problem:
WHEN an MDB, upon receiving a message, sends further messages to its own queue
AND the number of messages sent to the queue exceeds the value of the activation config property "maxSession"
THEN the (single) next message after maxSession messages is acted upon only AFTER the initial onMessage returns.

We noticed this because we expect to get answers on a temporary reply queue that we close in the initial onMessage.

This is also bad if one assumes that messages are handled roughly in order: if 500 messages are sent in this situation and maxSession is 15, then the 15th, 30th, 45th, etc. message will be acted upon after the 500th (or 499th).

It seems that when there are messages to be delivered but no available MDB instances, the messages are allocated round-robin to the existing instances, to be acted upon as soon as those instances finish their current task. Since the first instance of this MDB is busy waiting for the messages to be delivered and acted upon, it is essentially waiting for itself and starves.

A solution could be for instances that have drained their queue to take messages from other instances, or for the waiting queue to be centralized so that an instance simply takes the next message from the central queue.

Version-Release number of selected component (if applicable):
We can reproduce this in WildFly as well, so we assume it's present in all versions. EAP 6.3 is where we noticed it first.

How reproducible:
Every time.

Steps to Reproduce:
I created a project for this, here: https://github.com/kwFuchs/maxSessionBugProject
1. Create an MDB that sends messages to itself and waits for them to finish.
2. Send a message to that MDB that causes it to send more messages than the maxSession attribute allows.
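For illustration, the failure pattern can be reproduced arithmetically with a small model (plain Java, no application server needed). The round-robin session binding, starting at the session after the blocked one, is an assumption inferred from the observed thread logs, not a confirmed HornetQ internal:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxSessionModel {
    // Model of the hypothesis: each incoming message is bound round-robin to one
    // of maxSession sessions at receipt time. The session running the parent
    // onMessage is blocked, so any child bound to it cannot run until the
    // parent returns. Returns the indices of those starved child messages.
    public static List<Integer> stuckChildren(int maxSession, int children, int blockedSession) {
        List<Integer> stuck = new ArrayList<>();
        for (int i = 0; i < children; i++) {
            // Assumption: distribution starts at the session after the blocked one.
            int session = (blockedSession + 1 + i) % maxSession;
            if (session == blockedSession) {
                stuck.add(i);
            }
        }
        return stuck;
    }

    public static void main(String[] args) {
        // maxSession = 15, 20 child messages: child 14 starves (fails every time).
        System.out.println(stuckChildren(15, 20, 0)); // [14]
        // maxSession = 25, 20 child messages: nothing starves (works every time).
        System.out.println(stuckChildren(25, 20, 0)); // []
    }
}
```

Under this model, with maxSession 15 and 500 child messages, every 15th message (33 in total) waits for the initial onMessage to return, matching the behaviour described above.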

Actual results:
Messages are acted upon after the first onMessage completes

Expected results:
Messages are acted upon before the first onMessage completes.

Additional info:
Two workarounds:
1. Use a separate queue for the second part (in my example project, TestStack should be its own MDB).
2. Increase maxSession to a very large number.
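For reference, both the destination and maxSession are set through the MDB's activation config. A minimal sketch of workaround 2 (bean name, queue name, and the value 100 are illustrative):

```java
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/TestQueue"),
    // Workaround 2: raise maxSession well above the number of messages sent per request
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "100")
})
public class StackStarterMDB implements MessageListener { /* ... */ }
```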

Comment 1 Clebert Suconic 2017-02-03 13:27:35 UTC
This is a fundamental problem in your application, and there is no way to fix it.

You are in a MessageListener, sending to your own queue and creating a consumer on that same queue; those messages won't be delivered until you complete your current delivery. It's starvation in your application, nothing that we can do about it.

Anyway, another workaround would be to set consumerWindowSize=0; at least you won't have batches on the client, but there's no guarantee you would receive the same message you just sent.
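For anyone trying this workaround in-container: in EAP 6, the consumer window size is configured on the connection factory in the messaging subsystem. A sketch (the factory name and the rest of its configuration depend on your setup; connectors and JNDI entries are omitted):

```xml
<connection-factory name="RemoteConnectionFactory">
    <!-- 0 disables client-side message buffering (fetch ahead) -->
    <consumer-window-size>0</consumer-window-size>
</connection-factory>
```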

Comment 2 fuchs@kwsoft.de 2017-02-03 13:30:04 UTC
It works fine on all application servers except JBoss / WildFly.

Comment 3 Clebert Suconic 2017-02-03 13:48:35 UTC
This looks like a TX problem to me as well. Open a new TX to send. I don't see how this is not an app problem. I have worked with a lot of messaging systems (including competitors) and this would be an anti-pattern in any of them.

Anyway, if you are relying on a specific messaging system's bad semantics, it's still your app's issue IMHO.

I will let QA decide what to do with this.

Comment 4 Miroslav Novak 2017-02-03 14:21:53 UTC
Clebert is right. Note that by default everything you do in your MDB is part of an XA transaction. So the send to TestStackStarterQueue in handleStartOfParalellStacks() is not finished (better said, committed) until onMessage finishes. When you wait in handleStartOfParalellStacks() to receive something, you don't give the previously sent messages a chance to be committed.

Note that queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE); is ignored when the MDB is in a transaction context.

Could you annotate your StackStarterMDB like:
@TransactionManagement(value = TransactionManagementType.CONTAINER)
@TransactionAttribute(value = TransactionAttributeType.NOT_SUPPORTED)
public class StackStarterMDB implements MessageListener {

This will tell the server not to process messages in an XA transaction, and your code will work.

Comment 5 fuchs@kwsoft.de 2017-02-06 09:11:33 UTC
I used the transaction annotations as requested. Same result: every maxSession-th message failed.
In our actual application we use bean-managed transactions, so I tried that as well. Same result.

I had omitted the transaction management details from the example program because I didn't think they would be relevant to reproducing the situation; I apologize if that caused additional work for you.

If transactions were the problem, I would expect the messages that fail to run not to be directly tied to the maxSession property.
E.g. with a maxSession of 15 and 20 messages sent, it fails every time; with a maxSession of 25 and 20 messages, it works every time.

While testing these transaction changes I noticed the following:
The thread that handles the messages that get stuck is always the same one that handles the initial message, while the messages that do get processed in time never use that particular thread.

> [mdbs.StackStarterMDB] (Thread-18 (HornetQ-client-global-threads-501607829)) Open temporary message queue...
> [mdbs.TestStack] (Thread-28 (HornetQ-client-global-threads-501607829)) starting message: start process 0
> [mdbs.TestStack] (Thread-27 (HornetQ-client-global-threads-501607829)) starting message: start process 1
> [...]
> [mdbs.StackStarterMDB] (Thread-18 (HornetQ-client-global-threads-501607829)) did not receive all replies
> [mdbs.StackStarterMDB] (Thread-18 (HornetQ-client-global-threads-501607829)) Close temporary message queue.
> [mdbs.TestStack] (Thread-18 (HornetQ-client-global-threads-501607829)) starting message: start process 14 will think for 20000 ms.

In this example, Thread-18 was used for the initial message, which opened and closed the temporary reply queue. It also handled the messages that were multiples of 15 (starting from 0), 15 also being the maxSession value. Messages 0 and 1 were handled by Thread-28 and Thread-27 instead.

Comment 6 fuchs@kwsoft.de 2017-02-06 09:15:51 UTC
It seems to me that messages are assigned to threads the moment they are received.

If that is the case, then a performance issue could be lurking behind this bug:

If one sends messages to an MDB with very different processing times, the long-running ones may all pile up on one thread while the other threads idle.
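If the pre-assignment hypothesis holds, the latency effect can be illustrated with a small scheduling model (plain Java, no application server; the round-robin binding is an assumption based on the observations above, not a confirmed HornetQ internal). It compares per-thread pre-assignment against the central-queue scheme proposed in the description:

```java
import java.util.Arrays;

public class PileUpModel {
    // Pre-assignment: message i is bound to thread i % threads when received,
    // and each thread works through its own backlog in order.
    static long[] preAssigned(long[] costs, int threads) {
        long[] clock = new long[threads];
        long[] finish = new long[costs.length];
        for (int i = 0; i < costs.length; i++) {
            int t = i % threads;
            clock[t] += costs[i];
            finish[i] = clock[t];
        }
        return finish;
    }

    // Central queue: each message goes to whichever thread frees up first.
    static long[] centralQueue(long[] costs, int threads) {
        long[] clock = new long[threads];
        long[] finish = new long[costs.length];
        for (int i = 0; i < costs.length; i++) {
            int t = 0;
            for (int j = 1; j < threads; j++) {
                if (clock[j] < clock[t]) t = j;
            }
            clock[t] += costs[i];
            finish[i] = clock[t];
        }
        return finish;
    }

    public static void main(String[] args) {
        long[] costs = {100, 1, 1, 1}; // one slow message, three fast ones
        // Message 2 is stuck behind the slow message on its pre-assigned thread:
        System.out.println(Arrays.toString(preAssigned(costs, 2)));  // [100, 1, 101, 2]
        // With a central queue it runs on the idle thread instead:
        System.out.println(Arrays.toString(centralQueue(costs, 2))); // [100, 1, 2, 3]
    }
}
```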

Comment 7 fuchs@kwsoft.de 2017-02-06 09:30:20 UTC
(In reply to Clebert Suconic from comment #1)

> Anyway, another workaround would be to set consumerWindowSize=0, at least
> you won't have batches on the client.. but there's no guarantees you would
> receive the same message you just sent.

Thank you for this suggestion, it did work. I'll investigate its implications further.

Comment 8 Clebert Suconic 2017-02-06 15:51:27 UTC
This is not a workaround, Fuchs... it's a fundamental problem. If you are using fetch ahead on messages, messages will be delivered in batches to that consumer.

Your other application servers were not fetching ahead... so if you need this kind of semantics, just disable fetch ahead.

I would redesign your application though. It's an anti-pattern and somewhat of a mess even on other application servers; you have no guarantee about which MDB instance will receive your message... it's a fragile app.

Anyway, I will close this as not a bug... disable fetch ahead of messages and you get the same semantics. Enable it on any message broker and you get the same issue, guaranteed.
