Bug 739898 - MDB suddenly stops processing messages
Summary: MDB suddenly stops processing messages
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: qpid-java
Version: 2.0
Hardware: All
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: messaging-bugs
QA Contact: MRG Quality Engineering
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-20 11:19 UTC by Martin Vecera
Modified: 2012-12-07 17:43 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-12-07 17:41:18 UTC
Target Upstream Version:
Embargoed:


Attachments
Server log showing the problem. (166.34 KB, text/x-log)
2011-09-20 11:19 UTC, Martin Vecera
Server log with Qpid 0.14, January 11th 2012 (19.32 KB, application/x-gzip)
2012-01-11 07:50 UTC, Martin Vecera

Description Martin Vecera 2011-09-20 11:19:34 UTC
Created attachment 524002 [details]
Server log showing the problem.

I have a scenario where I send 100,000 messages to an MRG queue. I have an MDB on a server that should simply read all the messages and, upon arrival of the last message, send a confirmation message to another queue.
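
For illustration, a minimal sketch of the kind of MDB the scenario describes (the class name, JNDI names, and confirmation text here are hypothetical, not taken from the test code):

import java.util.concurrent.atomic.AtomicInteger;
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.jms.*;

@MessageDriven
public class CountingMessageBean implements MessageListener {

   private static final int EXPECTED = 100000;            // messages sent by the test
   private static final AtomicInteger received = new AtomicInteger();

   @Resource(mappedName = "QpidConnectionFactory")         // assumed connection factory JNDI name
   private ConnectionFactory factory;

   @Resource(mappedName = "queue/confirmation")            // assumed confirmation queue JNDI name
   private Queue confirmationQueue;

   public void onMessage(Message message) {
      // Count every delivery; on the last expected message, send the confirmation.
      if (received.incrementAndGet() == EXPECTED) {
         Connection connection = null;
         try {
            connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(confirmationQueue)
                   .send(session.createTextMessage("all messages received"));
         } catch (JMSException e) {
            throw new RuntimeException(e);
         } finally {
            if (connection != null) {
               try { connection.close(); } catch (JMSException ignored) { }
            }
         }
      }
   }
}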

With the latest JCA RA (md5sum e1377e10acabe75f804cedfe5eea649b qpid-ra.rar) I am experiencing strange behavior.

First (and this is different from previous versions of the adapter), only 5 instances of the MDB are initialized. Given the heavy load, I would expect all 15 instances (the default max pool size) to be instantiated. However, 15 connections to the MRG server are created (see the attached log file, starting at 2011-09-20 05:21:41,948).

Second (I never tried this with previous versions of the adapter), message processing stopped after ~53,000 messages with a warning message (see the attached log file, starting at 2011-09-20 05:47:28,647). The Qpid server still reported messages waiting to be processed:
mrg_mrg_jca_mdb_transactions_in - msg:47.7k, msgIn:100k, msgOut: 52.3k

I tried to redeploy the application with the MDB, but it was not possible to undeploy it (see the attached log file, starting at 2011-09-20 06:18:39,443 INFO).

After restarting the server (I had to kill it), the MDB started consuming the messages again. This time, only 4 instances of the MDB were created.

Steps to Reproduce:
1. checkout https://svn.devel.redhat.com/repos/jboss-qa/mrg/tests/mdb
2. fix the annotation in
/mdb-ejb/src/main/java/org/jboss/test/mdb/MRGJCAMessageBean.java to point to
your MRG server (this class is not used in the test, but could prevent
deployment; a sketch of such an annotation follows these steps)
3. compile with mvn -DskipTests clean package
4. copy mdb/mdb-test/target/mdb-test.ear to deploy directory of EAP 5.1.1
(profile production)
5. checkout https://svn.devel.redhat.com/repos/jboss-qa/mrg/qpid-java-jca
6. configure qpid-java-jca/qpid-jca-ds.xml to point to your MRG server
7. copy qpid-java-jca/qpid* to the server's deploy directory
8. Start EAP
9. Go to http://localhost:8080/mdb-servlet/mrgtest?op=performance01 and see the server log.
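
For step 2, the change is roughly of this shape. The property names follow the Qpid JCA adapter's activation-config conventions but should be checked against your adapter version; the credentials, client id, virtual host, and broker host are placeholders for your MRG server:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
   @ActivationConfigProperty(propertyName = "destinationType",
                             propertyValue = "javax.jms.Queue"),
   @ActivationConfigProperty(propertyName = "destination",
                             propertyValue = "mrg_mrg_jca_mdb_transactions_in"),
   @ActivationConfigProperty(propertyName = "connectionURL",
                             propertyValue = "amqp://guest:guest@client/test?brokerlist='tcp://your-mrg-host:5672'")
})
public class MRGJCAMessageBean implements MessageListener {
   public void onMessage(Message message) { /* ... */ }
}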

Comment 1 Andrew Stitcher 2011-09-21 19:50:30 UTC
Is this a duplicate of bug 739332?

Comment 2 Weston M. Price 2011-09-21 19:58:32 UTC
No, this is a different issue. The deadlock issue of bug 739332 is only applicable to endpointActivation on startup, not after a listener (MDB) is deployed and running. The issue above appears to be purely performance-based.

Comment 3 Weston M. Price 2011-09-22 13:13:20 UTC
A few things about this. Using QE's code, I was able to reproduce the issue. However:

1) The issue (as expected) is in the JMS client, not the adapter. As such, I am changing the component.
2) This is a performance issue, and as such I am changing the milestone; there is probably not enough time to evaluate and possibly fix this issue in the 2.0.4 timeframe.

Comment 4 Weston M. Price 2011-09-22 14:08:11 UTC
FYI

A few more things as a result of some ad hoc testing. I was able to get the test to complete only after making the mrg_mrg_jca_mdb_transactions_in queue non-durable. The test completed with the following message from the Servlet:


performance01 125130 ACK

I believe we have known performance issues around durable destinations, so I just wanted to note that this definitely seems to influence the test results.
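
For context, with the Qpid Java client the node's durability is declared in the 0-10 address string when the queue is created, so the durable and non-durable variants of the test queue differ only in that string. A sketch under that assumption (only the queue name is taken from the test):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;

public class QueueDurability {
   // durable = true survives broker restarts at a throughput cost;
   // durable = false is the variant with which the test completed.
   static Queue testQueue(Connection connection, boolean durable) throws JMSException {
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      return session.createQueue(
            "mrg_mrg_jca_mdb_transactions_in; {create: always, node: {durable: "
            + durable + "}}");
   }
}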

Also, the error message in the logs:

2011-09-20 06:50:23,974 ERROR [STDERR] (HDScanner) javax.jms.IllegalStateException: This method is not applicable inside the application server. See the J2EE spec, e.g. J2EE1.4 Section 6.6

is due to the following call in the MRGTestServlet:

@PreDestroy
public void preDestroy() {
   try {
      connection.stop();   // <-- here
      connection.close();
   } catch (JMSException e) {
      e.printStackTrace();
   }
}

The call to connection.stop() is only applicable to JEE application clients. However, this doesn't have anything to do with the test as a whole and can be ignored as a factor.
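
For reference, the server-safe form of the callback simply drops the stop() call (this is the change made in comment 5):

@PreDestroy
public void preDestroy() {
   try {
      connection.close();
   } catch (JMSException e) {
      e.printStackTrace();
   }
}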

Comment 5 Martin Vecera 2011-10-14 13:36:19 UTC
I removed the call to the stop() method; however, it does not influence the test results.

Comment 6 Martin Vecera 2012-01-11 07:50:39 UTC
Created attachment 552027 [details]
Server log with Qpid 0.14, January 11th 2012

I attached a fresh log from the server with Qpid client and server version 0.14. It seems that this time the message reading did not stop. However, I do not know how many messages were read correctly - I'm not sure whether there were any losses. I wonder why there are so many exceptions regarding refused connections.

Comment 7 Martin Vecera 2012-01-11 16:11:02 UTC
I discovered that there was a different problem in the last rerun. I have now verified that Qpid works fine and this is no longer an issue.

