Description of problem:
- With both 2- and 3-node clusters (and we suspect this holds for any number of nodes), when a member is manually fenced off, clustat hangs on the nodes still in quorum.
- All GFS-related operations also hang, which we would expect, but clustat still needs to function and report accurate status.

Version-Release number of selected component (if applicable):
kernel - 2.6.9-11.EL_smp
gfs - 6.1
cluster suite - 4

How reproducible:
Every time

Steps to Reproduce:
1. Configure a 2- or 3-node cluster (we believe any node count behaves the same) for manual fencing.
2. Pull the heartbeat cable, or otherwise stop heartbeat communication to a member.
3. Verify the member was fenced off.
4. Go to another node that should still have quorum and try executing clustat.

Actual results:
clustat hangs

Expected results:
clustat should not hang, and should show the fenced-off node as no longer in the cluster.

Additional info:
Please let me know if any additional info is required to reproduce. This is a high-priority item for us. Thanks in advance. Jim
clustat is really part of rgmanager, which will stop during transitions if GFS is in use. clustat should probably time out after a few seconds of trying to reach clurgmgrd rather than blocking indefinitely.
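A minimal sketch of the kind of client-side timeout clustat could apply when querying the daemon. The socket path and the request/response protocol here are hypothetical placeholders, not rgmanager's actual IPC; the point is only the pattern of failing fast instead of hanging:

```python
import socket

# Hypothetical socket path; rgmanager's real IPC endpoint may differ.
CLURGMGRD_SOCK = "/var/run/cluster/clurgmgrd.sock"

def query_rgmanager(path=CLURGMGRD_SOCK, timeout=5.0):
    """Try to reach clurgmgrd, giving up after `timeout` seconds
    instead of blocking forever while the daemon is wedged."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)  # applies to connect() and recv()
    try:
        s.connect(path)
        s.sendall(b"STATUS\n")  # placeholder request, not the real protocol
        return s.recv(4096)
    except socket.timeout:
        return None  # daemon unresponsive: report that, don't hang
    except OSError:
        return None  # daemon not running / socket missing
    finally:
        s.close()
```

On a None return, clustat could still print cluster membership from the membership layer and simply mark service states as unavailable.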
Created attachment 118651 [details] strace of a clustat hang This clustat hang occurred while running the test described in bug #166701.
Created attachment 118652 [details] strace of a clustat hang This clustat hang occurred while running the test described in bug #166701.
Created attachment 118653 [details] strace of clusvcadm hang This clusvcadm hang occurred while running the test described in bug #166701.
Created attachment 118654 [details] strace of clusvcadm hang This clusvcadm hang occurred while running the test described in bug #166701.
Created attachment 122339 [details] This is a DLM hang, not an rgmanager/clustat problem per se. Rgmanager goes into D state (uninterruptible disk wait) waiting on the DLM; here's a SysRq-T trace captured when this happens.
With luck this will turn out to be the same bug as #175805. Has anybody tested with that fix in place?