Description of problem:
On RHEL 4U5, clustat always reports that rgmanager is running after a GULM cluster is started. With a DLM configuration, clustat reports correctly.

How reproducible:

Steps to Reproduce:
1. Create a cluster with 3 nodes using GULM locking, with one node set as the lock server.
2. Start the cluster on all nodes (start ccsd and lock_gulmd only).
3. Run clustat or clustat -x; rgmanager is shown as running on all started nodes.

Actual results:
[root@node02 ~]# clustat
msg_open: No route to host
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  node04                                   Online, rgmanager
  node03                                   Offline
  node02                                   Online, Local, rgmanager

-----------------------------------------------------------
[root@node02 ~]# clustat -x
msg_open: No route to host
<?xml version="1.0"?>
<clustat version="4.1.1">
  <quorum quorate="1" groupmember="1"/>
  <nodes>
    <node name="node04" state="1" local="0" estranged="0" rgmanager="1" nodeid="0xffff0000540610ac"/>
    <node name="node03" state="0" local="0" estranged="0" rgmanager="0" nodeid="0x0000000000000000"/>
    <node name="node02" state="1" local="1" estranged="0" rgmanager="1" nodeid="0xffff0000520610ac"/>
  </nodes>
</clustat>

Expected results:
[root@node02 ~]# clustat
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  node04                                   Online
  node03                                   Offline
  node02                                   Online, Local

-----------------------------------------------------------
[root@node02 ~]# clustat -x
<?xml version="1.0"?>
<clustat version="4.1.1">
  <quorum quorate="1"/>
  <nodes>
    <node name="node04" state="1" local="0" estranged="0" rgmanager="0" nodeid="0xffff0000540610ac"/>
    <node name="node03" state="0" local="0" estranged="0" rgmanager="0" nodeid="0x0000000000000000"/>
    <node name="node02" state="1" local="1" estranged="0" rgmanager="0" nodeid="0xffff0000520610ac"/>
  </nodes>
</clustat>

Additional info:
This is mostly because under gulm clusters, there's no such notion as virtual groups. If this is to be fixed, it needs to be done in the gulm magma plugin.
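As a rough illustration only (this is not the actual clustat or magma code, and all names below -- node_state, flag_rgmanager_from_group, flag_rgmanager_unknown -- are invented for the sketch): a plugin that has real group membership, as the cman/DLM side does, can set the per-node rgmanager flag from actual membership of the rgmanager group, while a plugin with no group notion cannot determine it at all. The bug above amounts to treating "cannot determine" as "running"; the expected output instead leaves the flag clear.

/* Illustrative sketch under the assumptions stated above; hypothetical names,
 * not the real magma plugin API. */
#include <stdio.h>
#include <string.h>

struct node_state {
	char name[64];
	int  online;		/* 1 = member is online */
	int  rgmanager;		/* 1 = clustat would print ", rgmanager" */
};

/* DLM-style case: the plugin can ask which members have joined the
 * rgmanager group, so the flag reflects actual membership. */
static void flag_rgmanager_from_group(struct node_state *nodes, int n,
				      char group_members[][64], int nmembers)
{
	int i, j;

	for (i = 0; i < n; i++) {
		nodes[i].rgmanager = 0;
		for (j = 0; j < nmembers; j++) {
			if (strcmp(nodes[i].name, group_members[j]) == 0) {
				nodes[i].rgmanager = 1;
				break;
			}
		}
	}
}

/* gulm-style case: there is no group membership to query.  Setting the
 * flag for every online node (the behaviour in this bug) is a false
 * positive; leaving it clear matches the expected output when membership
 * cannot be determined. */
static void flag_rgmanager_unknown(struct node_state *nodes, int n)
{
	int i;

	for (i = 0; i < n; i++)
		nodes[i].rgmanager = 0;
}

int main(void)
{
	struct node_state nodes[] = {
		{ "node04", 1, 0 }, { "node03", 0, 0 }, { "node02", 1, 0 },
	};
	char rg_members[][64] = { "node02" };	/* pretend rgmanager runs on node02 only */
	int i;

	flag_rgmanager_from_group(nodes, 3, rg_members, 1);
	for (i = 0; i < 3; i++)
		printf("%s  %s%s\n", nodes[i].name,
		       nodes[i].online ? "Online" : "Offline",
		       nodes[i].rgmanager ? ", rgmanager" : "");

	flag_rgmanager_unknown(nodes, 3);	/* no group info: claim nothing */
	return 0;
}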
This would be destabilizing to fix in the gulm plugin. If we can come up with an easy way to fix this in the future, we will fix it, but for now, ->WONTFIX