Bug 610843 - JMS issues on 2nd node of a cluster
Summary: JMS issues on 2nd node of a cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: RHQ Project
Classification: Other
Component: Alerts
Version: 3.0.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: RHQ Project Maintainer
QA Contact: Mike Foley
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-07-02 15:04 UTC by Heiko W. Rupp
Modified: 2014-06-02 19:02 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-02 19:02:01 UTC
Embargoed:



Description Heiko W. Rupp 2010-07-02 15:04:34 UTC
There are many messages in the JMS tables waiting to be delivered, which the other server is working on.
When the 2nd server is started, the following shows up:


10:27:19,645 INFO  [JmsActivation] Attempting to reconnect org.jboss.resource.adapter.jms.inflow.JmsActivationSpec@f99e17(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter@1482a37 destination=queue/AlertConditionQueue isTopic=false tx=true durable=false reconnect=10 provider=java:/DefaultJMSProvider user=null maxMessages=1 minSession=1 maxSession=15 keepAlive=60000 useDLQ=true DLQHandler=org.jboss.resource.adapter.jms.inflow.dlq.GenericDLQHandler DLQJndiName=queue/DLQ DLQUser=null DLQMaxResent=5)
10:27:19,659 INFO  [JmsActivation] Reconnected with messaging provider.
10:27:19,662 WARN  [JmsActivation] Failure in jms activation org.jboss.resource.adapter.jms.inflow.JmsActivationSpec@f99e17(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter@1482a37 destination=queue/AlertConditionQueue isTopic=false tx=true durable=false reconnect=10 provider=java:/DefaultJMSProvider user=null maxMessages=1 minSession=1 maxSession=15 keepAlive=60000 useDLQ=true DLQHandler=org.jboss.resource.adapter.jms.inflow.dlq.GenericDLQHandler DLQJndiName=queue/DLQ DLQUser=null DLQMaxResent=5)
org.jboss.mq.SpyJMSException: Could not update the message in the database: update affected 0 rows
        at org.jboss.mq.pm.jdbc2.PersistenceManager.update(PersistenceManager.java:1331)
        at org.jboss.mq.server.BasicQueue.updateRedeliveryFlags(BasicQueue.java:1212)
        at org.jboss.mq.server.BasicQueue.receive(BasicQueue.java:720)
        at org.jboss.mq.server.JMSQueue.receive(JMSQueue.java:185)
        at org.jboss.mq.server.ClientConsumer.receive(ClientConsumer.java:242)
        at org.jboss.mq.server.JMSDestinationManager.receive(JMSDestinationManager.java:628)
        at org.jboss.mq.server.JMSServerInterceptorSupport.receive(JMSServerInterceptorSupport.java:141)
        at org.jboss.mq.security.ServerSecurityInterceptor.receive(ServerSecurityInterceptor.java:115)
        at org.jboss.mq.server.TracingInterceptor.receive(TracingInterceptor.java:450)
        at org.jboss.mq.server.JMSServerInvoker.receive(JMSServerInvoker.java:147)
        at org.jboss.mq.il.jvm.JVMServerIL.receive(JVMServerIL.java:141)
        at org.jboss.mq.Connection.receive(Connection.java:868)
        at org.jboss.mq.SpyConnectionConsumer.run(SpyConnectionConsumer.java:277)
        at java.lang.Thread.run(Thread.java:595)
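
The SpyJMSException above comes from JBossMQ's JDBC persistence layer: it issues a
guarded UPDATE against the shared message table and treats 0 affected rows as an
error. Below is a minimal sketch of that pattern, not the jdbc2 PersistenceManager
source; the table and column names (JMS_MESSAGES, MESSAGEID, DESTINATION, REDELIVERED)
and the markRedelivered helper are illustrative assumptions.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RedeliveryFlagUpdate {

    // Hypothetical helper: mark a queued message as redelivered with a guarded
    // UPDATE, the pattern that produces the failure seen in the stack trace.
    static void markRedelivered(Connection con, long messageId, String destination)
            throws SQLException {
        // Illustrative table and column names; the real jdbc2 schema may differ.
        String sql = "UPDATE JMS_MESSAGES SET REDELIVERED = 'T' "
                   + "WHERE MESSAGEID = ? AND DESTINATION = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, messageId);
            ps.setString(2, destination);
            int rows = ps.executeUpdate();
            if (rows == 0) {
                // Another node has already delivered and removed the row, so the
                // UPDATE matches nothing: the "update affected 0 rows" failure.
                throw new SQLException(
                        "Could not update the message in the database: update affected 0 rows");
            }
        }
    }
}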

Comment 1 Corey Welton 2010-09-28 12:54:55 UTC
Heiko, can we get some repro steps here?

Comment 2 Heiko W. Rupp 2010-10-18 13:20:44 UTC
I think this happened when I had an error on alert sending (a bad condition in
the plugin, such as a non-existent SMTP server) in HA. One server was running,
the other was down, and the JMS queue filled up.
Then the error condition was cleared and the 1st server started delivering
messages. When the 2nd server started, this issue showed up.

But then, JBossMQ is not known to be the most reliable JMS provider.

It may be that we are not using the HA singleton service for JMS, and that the
JMS providers on the two HA servers are both looking at the same database
table(s) and thus fighting for the queue entries there.
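
A minimal sketch of that suspected race, not RHQ or JBossMQ code: each node
selects the oldest row for the destination and then tries to claim it with a
guarded DELETE, so the node that loses the race sees 0 affected rows, matching
the symptom above. The claimNextMessage helper and the table and column names
are illustrative assumptions. Deploying the JMS server as an HA singleton, so
that only one node polls the table, would remove the second competitor.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CompetingQueuePoller {

    // Hypothetical polling step run by each cluster node: pick the oldest
    // message for a destination and try to claim it by deleting the row.
    // Returns true if this node won the race, false otherwise.
    static boolean claimNextMessage(Connection con, String destination) throws SQLException {
        String select = "SELECT MESSAGEID FROM JMS_MESSAGES "
                      + "WHERE DESTINATION = ? ORDER BY MESSAGEID";
        String delete = "DELETE FROM JMS_MESSAGES WHERE MESSAGEID = ? AND DESTINATION = ?";
        try (PreparedStatement sel = con.prepareStatement(select)) {
            sel.setString(1, destination);
            try (ResultSet rs = sel.executeQuery()) {
                if (!rs.next()) {
                    return false; // queue is empty
                }
                long messageId = rs.getLong(1);
                try (PreparedStatement del = con.prepareStatement(delete)) {
                    del.setLong(1, messageId);
                    del.setString(2, destination);
                    // With two non-singleton JMS providers, both nodes can select
                    // the same row; the slower node's DELETE then affects 0 rows,
                    // i.e. the two servers fight for the same queue entries.
                    return del.executeUpdate() == 1;
                }
            }
        }
    }
}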

Comment 3 Jay Shaughnessy 2014-06-02 19:02:01 UTC
We now use HornetQ and there have been many backend updates, so this is likely no longer an issue.

