This enhancement is based on bz#912688. If an EAP 6.1.0 server has an MDB whose HornetQ resource adapter is connected to another server in a HornetQ cluster, and any node in that cluster does not have a queue/topic used by the MDB deployed, the MDB starts throwing NPEs.
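For context, a minimal sketch of the kind of MDB involved. The bean and destination names here are hypothetical, not taken from the reproducer:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical MDB bound through the HornetQ resource adapter; the
// destination name is illustrative. If any cluster node the RA connects
// to lacks this queue, delivery breaks with NPEs as described above.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms/queue/InQueue")
})
public class ExampleMDB implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // process the message
    }
}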
I don't understand how this could be. BZ912688 was happening because of the same issue you found on SpecJMS before the release, where the destination was not being cleared. At this point I have no clue how to work on this issue, as I have no idea how to replicate it... I will assign it back to you since I need more information.
Steps to Reproduce:
0. Download and unzip reproducer.zip from the attachment. Execute the next steps in the unzipped "reproducer" directory.
1. First create an IP alias with "ifconfig eth1:0 192.168.40.1 up" as the root user (adjust the command for your environment).
2. Run "sh prepare.sh", which:
   - downloads EAP 6.1.0.GA
   - creates 3 directories: server1, server2, server3
   - copies the jboss-eap-6.1 directory to server1, server2, and server3
   - copies the standalone-full-ha-jms.xml configuration to server1
   - copies the standalone-full-ha-mdb.xml configuration to server2
   - copies mdb1.jar to server2's deployments directory
3. Start the first server: "sh start-server1.sh 192.168.40.1"
4. Start the third server: "sh start-server3.sh <third_ip>"
5. Start the second server: "sh start-server2.sh <second_ip>"
6. Start the JMS producer: "sh start-producer.sh 192.168.40.1 1000"
7. See the exceptions in the server2 log.
It's important to let server1 and server3 form a cluster before server2 is started; otherwise the exceptions won't occur.
Created attachment 755223 [details] reproducer.zip
From the dev feedback (bz#912688) this is not an issue. I've set the jboss-eap-6.1.1 flag so as not to lose focus on this BZ.
My previous comment may look confusing without knowing the context. By default, the HornetQ RA load-balances connections across all nodes in the cluster. When one node does not have the specific destinations deployed, the HornetQ RA starts to throw NPEs and breaks delivery to the MDB that uses those destinations; the MDB does not receive any messages. The purpose of this BZ is to make the HornetQ RA more robust when it connects to a node in the cluster that does not have the destinations needed by the MDB deployed, so it can still deliver messages from the "well configured" nodes.
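To illustrate the failure mode, a rough sketch with invented types (this is not HornetQ internals): the RA round-robins sessions across cluster nodes, a node that lacks the destination returns null from the lookup, and the unchecked use of that null is where the NPE surfaces.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only -- invented types, not the real HornetQ RA code.
class RoundRobinDeliverySketch {
    private final List<ClusterNode> nodes;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinDeliverySketch(List<ClusterNode> nodes) {
        this.nodes = nodes;
    }

    void createConsumer(String queueName) {
        // Round-robin over the cluster nodes.
        ClusterNode node = nodes.get(next.getAndIncrement() % nodes.size());
        Queue queue = node.lookupQueue(queueName); // null on a node missing the queue
        queue.addConsumer();                       // NullPointerException here
    }

    interface ClusterNode { Queue lookupQueue(String name); }
    interface Queue { void addConsumer(); }
}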
Nodes aren't aware of the configuration of backup nodes, and since the backup may not be available when the live starts, there is not much we can do on the live. We could, however, add a check for the queue on failover and, if it's not present, stop the MDB and log a warning.
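A hedged sketch of that suggested failover-time check, with invented types (the actual RA code differs):

import java.util.logging.Logger;

// Hypothetical sketch of the suggested behaviour: on failover, check
// whether the queue exists on the node we reconnected to; if it does not,
// log a warning and stop the MDB rather than letting delivery fail with NPEs.
class FailoverCheckSketch {
    private static final Logger LOG = Logger.getLogger("FailoverCheckSketch");

    void onFailover(ClusterNode node, String queueName, Mdb mdb) {
        if (node.lookupQueue(queueName) == null) {
            LOG.warning("Queue " + queueName + " is not deployed on the failover node; stopping MDB");
            mdb.stop();
        }
    }

    interface ClusterNode { Object lookupQueue(String name); }
    interface Mdb { void stop(); }
}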
After an IRC chat with Andy, dev will discuss possible solutions for this.
Created attachment 792818 [details] patch to stop partial init of MDB
The issue is that in HA, where sessions are round-robinned, if the first session succeeds and is started, the second may still fail. At this point we stop the MDB, but the first session is already handling messages. This patch postpones the starting of the sessions until all sessions are created and then starts them all together. If there is a single failure, we stop the whole MDB rather than partially initialize it.
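The gist of the patch as a hedged sketch, again with invented types (the actual change lives in the HornetQ RA activation code; see the PR linked below): create every session first, and only start delivery once all creations have succeeded.

import java.util.ArrayList;
import java.util.List;

// Sketch of the patch's approach, not the real RA code: a single failure
// tears the whole activation down, so the MDB is never left partially
// initialized with some sessions already delivering messages.
class AllOrNothingActivationSketch {
    void setup(int sessionCount, SessionFactory factory) throws Exception {
        List<Session> sessions = new ArrayList<>();
        try {
            // Phase 1: create all sessions without starting delivery.
            for (int i = 0; i < sessionCount; i++) {
                sessions.add(factory.createSession());
            }
            // Phase 2: all creations succeeded, so start them together.
            for (Session s : sessions) {
                s.start();
            }
        } catch (Exception e) {
            // Any failure stops the whole MDB instead of a partial init.
            for (Session s : sessions) {
                s.close();
            }
            throw e;
        }
    }

    interface SessionFactory { Session createSession() throws Exception; }
    interface Session { void start() throws Exception; void close(); }
}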
*** Bug 914758 has been marked as a duplicate of this bug. ***
Andy: it seems that the patch makes sense. Can you do it?
pr sent - https://github.com/hornetq/hornetq/pull/1287
Tested with EAP 6.2.0.ER6 / HornetQ 2.3.9