Affects: Release Notes
Steps to Reproduce:
- Start two nodes with standalone-ha.xml
- Deploy the quickstart cluster-ha-singleton-ejb
- The timer is started once and logs every ten seconds
- Stop the node with the timer (Ctrl-Z in the console)
- Wait for shunning; the timer starts on the second node
- Resume the node (command fg)
- After the cluster is merged, both timers are active
project_key: JBPAPP6
If a SingletonService is deployed on both nodes, it starts on one node only, and this also works when nodes leave or join the cluster through a shutdown, crash, or start. In the case of a cluster merge, however, I expect the stop() method to be called, but the service is still running on two nodes after the merge.
Link: Added: This issue Cloned from AS7-5218
Workflow: Removed: GIT Pull Request workflow Added: jira Security: Added: Public Docs QE Status: Added: NEW
The issue is fixed and is awaiting testing against the latest EAP DR.
Link: Added: This issue Cloned to JBPAPP-10152
Affects: Added: Release Notes
Release Notes Docs Status: Added: Not Yet Documented Writer: Added: tomwells
Hey guys, could someone please provide me with a summary of what caused this issue, what the consequence for the customer was, how it was fixed, and how SingletonService behaves differently now? Thanks.
Release Notes Docs Status: Removed: Not Yet Documented Added: Needs More Info
Hey Tom, a SingletonService is implemented as a service that runs only once in a clustered environment. This has worked as expected when cluster members are started and stopped by 'normal' admin operations. But if there are network problems (a disconnect) and the cluster is split, there are two clusters, and each acts as the 'one and only', so two instances of the SingletonService are active, which is correct at that moment. After the members find each other again, there are two active instances, and one of them must be stopped to respect the singleton approach. The problem was that two (or more) instances stayed active without respecting the elected cluster-master node, which is unwanted. With the fix, the instance running on the node that was not elected is deactivated by calling the instance's stop() method, and the result is one active instance on the elected cluster-master node.
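The merge behavior described above can be sketched with a small, self-contained simulation. This is plain Java with no JBoss APIs; the class names and the lowest-node-name election policy here are made up for illustration (the actual election policy in EAP is configurable and not specified in this issue):

```java
import java.util.Comparator;
import java.util.List;

public class SingletonElectionDemo {
    // Minimal stand-in for a singleton service instance with a
    // start/stop lifecycle on a named node.
    static class Service {
        final String node;
        boolean active;
        Service(String node) { this.node = node; }
        void start() { active = true; }
        void stop()  { active = false; }
    }

    // After a merge, elect one master (here: lowest node name, a
    // hypothetical policy) and stop the instance on every other node.
    static String mergeAndElect(List<Service> instances) {
        String master = instances.stream()
                .map(s -> s.node)
                .min(Comparator.naturalOrder())
                .orElseThrow();
        for (Service s : instances) {
            if (s.node.equals(master)) {
                s.start();
            } else if (s.active) {
                s.stop(); // the fix: deactivate the non-master instance
            }
        }
        return master;
    }

    public static void main(String[] args) {
        // During the split, each partition ran its own instance.
        Service a = new Service("node1"); a.start();
        Service b = new Service("node2"); b.start();

        String master = mergeAndElect(List.of(a, b));
        System.out.println("master=" + master
                + " node1.active=" + a.active
                + " node2.active=" + b.active);
        // prints: master=node1 node1.active=true node2.active=false
    }
}
```

The point of the sketch is the merge step: before the fix, both instances stayed in the `active` state after the partitions rejoined; after the fix, only the instance on the elected node remains active, and the others are told to stop().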
Release Notes Docs Status: Removed: Needs More Info Added: Documented as Resolved Issue Release Notes Text: Added: One instance of SingletonService is intended to run per cluster. If a cluster is split due to networking issues, one instance of the SingletonService would run in each partition, which caused problems when the split cluster members rejoined the original cluster. The instance running on the node that is not elected cluster master is now deactivated by calling its stop() method, leaving only one active instance.
Verified on EAP 6.0.1 ER3
Release Notes Docs Status: Removed: Documented as Resolved Issue Writer: Removed: tomwells Release Notes Text: Removed: One instance of SingletonService is intended to run per cluster. If a cluster is split due to networking issues, one instance of the SingletonService would run in each partition, which caused problems when the split cluster members rejoined the original cluster. The instance running on the node that is not elected cluster master is now deactivated by calling its stop() method, leaving only one active instance. Docs QE Status: Removed: NEW