Red Hat Bugzilla – Bug 495655
cluster does not free up sessions on failover
Last modified: 2011-06-29 10:16:27 EDT
Created attachment 339448
Description of problem:
If you create a session with an application-specified name and then, after failover to another cluster node, try to open a new session with the same name, you get a SessionBusyException.
Even retrying after a short wait does not avoid this.
Steps to Reproduce:
1. start cluster with more than one node
2. have a client create a session with a given name (and a timeout of 0, which is the default) and, e.g., wait for a message
3. kill the node the client is connected to
4. have the client fail over to the other node and create a new session with the same name as the one used previously (which should by now have been destroyed)
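The retry in step 4 can be sketched as follows. This is a minimal illustration only: SessionBusyException and create_session are hypothetical stand-ins for the real client API, which needs a live broker to reproduce the bug.

```python
import time

# Hypothetical stand-in for the exception the real client raises when
# the broker still considers a session name to be in use.
class SessionBusyException(Exception):
    pass

def reopen_named_session(create_session, name, attempts=5, delay=1.0):
    """After failover, try to recreate the session under its old name,
    retrying on SessionBusyException. Per this bug, every attempt fails
    because the dead node's session name is never freed cluster-wide."""
    last_error = None
    for _ in range(attempts):
        try:
            # create_session is whatever factory the client uses to open
            # a named session (timeout 0) against the surviving node.
            return create_session(name)
        except SessionBusyException as exc:
            last_error = exc
            time.sleep(delay)  # a short wait does not help, per this report
    raise last_error
```

Against a broker exhibiting the bug, create_session would raise SessionBusyException on every attempt, so the helper eventually re-raises it.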
The attached case provides such a client. You should be able to run it against a cluster (with a queue called test-queue created), then kill the node it is connected to.
Actual results:
You get a SessionBusyException as often as you try.
Expected results:
A new session is created with the same name as the one used previously, which should by now have been destroyed.
If the session declares an exclusive (but not auto-deleted) queue, the lock on that queue is released on failover, as would be expected. However, the name of the session is still marked as busy.
This was a regression. Fixed in r764783.
Does this have test coverage?
No automated tests were added by r764783. I'll add one now.
Comment 3 is incorrect; there IS an automated regression test added by r764783:
Excellent, thanks! Closing.