Description of problem: After mounting somewhere between 15 and 23 GFS file
systems, clustat reports "Resource Group information unavailable".
Version-Release number of selected component (if applicable):
Every time, once a certain number of GFS file systems is mounted. On one
cluster that number is 15; on another it is 23.
Steps to Reproduce:
1. Run clustat and verify services are shown
2. Mount 23 GFS file systems
3. Run clustat and see "Resource Group information unavailable"
Actual results: Cluster services not shown
Expected results: Cluster services shown
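The reproduction steps above can be sketched as the loop below. The device and mount-point paths (/dev/vg0/gfsN, /mnt/gfsN) are hypothetical, and the loop is shown as a dry run that only prints the mount commands; remove the leading "echo" to actually mount on a cluster node.

```shell
# Dry run of the reproduction: print 23 GFS mount commands.
# Paths are placeholders; adjust to the cluster's actual devices.
for i in $(seq 1 23); do
    echo mount -t gfs "/dev/vg0/gfs$i" "/mnt/gfs$i"
done

# After mounting, re-check service state with clustat (dry run here too):
echo clustat
```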
Was rgmanager still running, or did it crash?
It was still running.
Could this bug be related to bug #171153? We have multiple applications,
running on every node in the cluster, that all run clustat at various times.
Can we get the fix you just referred to for bug #171153?
*** Bug 175099 has been marked as a duplicate of this bug. ***
Created attachment 122042 [details]
magma-plugins was incorrectly reading /proc/cluster/services for sizes > 4096
Setting component to magma-plugins
Patches in CVS, head, stable, rhel4 -- setting to modified.
The fix doesn't work; you can't read more than one page from /proc at a time.
However, there's an issue with issuing multiple reads to /proc/cluster/services,
which I've filed as a dependency of this bugzilla. I can make the correct fix for
magma-plugins, but there's no guarantee it will work in all cases until #175372
is fixed.
Created attachment 122082 [details]
Correct patch against 1.0.2/1.0.3 which reads in page sizes.
This patch fixed the problem. Thanks!
The rgmanager-1.9.43-0 and magma-plugins-1.0.3-0.3bz175033 packages seem to be
causing other problems. With a cluster configured for four nodes, if I bring up only
three of the four nodes, the cluster does not get created even though there is
quorum. All three nodes start fenced but never return.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.