Bug 175033 - clustat reports Resource Group information unavailable after N gfs file systems are mounted
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: magma-plugins
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Lon Hohberger
QA Contact: Cluster QE
Depends On: 175372
Reported: 2005-12-05 16:00 EST by Henry Harris
Modified: 2009-04-16 16:18 EDT

Fixed In Version: RHBA-2006-0172
Doc Type: Bug Fix
Last Closed: 2006-01-06 15:30:36 EST

Attachments
magma-plugins was incorrectly reading /proc/cluster/services for sizes > 4096 (1.15 KB, patch)
2005-12-08 14:33 EST, Lon Hohberger
Correct patch against 1.0.2/1.0.3 which reads in page sizes. (2.01 KB, patch)
2005-12-09 11:44 EST, Lon Hohberger

Description Henry Harris 2005-12-05 16:00:35 EST
Description of problem: After mounting somewhere between 15 and 23 GFS file
systems, clustat reports Resource Group information unavailable

Version-Release number of selected component (if applicable):

How reproducible:
Every time, once a certain number of GFS file systems is mounted.  On one
cluster that number is 15; on another cluster it is 23.

Steps to Reproduce:
1. Run clustat and verify services are shown
2. Mount 23 GFS file systems
3. Run clustat and see "Resource Group information is unavailable"
Actual results:
Cluster services not shown

Expected results:
Cluster services shown

Additional info:
Comment 1 Lon Hohberger 2005-12-06 09:29:00 EST
Was rgmanager still running, or did it crash?
Comment 2 Henry Harris 2005-12-06 10:56:53 EST
It was still running.
Comment 3 Henry Harris 2005-12-06 16:13:22 EST
Could this bug be related to bug #171153?  We have multiple applications that 
run on every node in the cluster that all run clustat at various times.  Can 
we get the fix you just referred to for bug #171153?
Comment 4 Lon Hohberger 2005-12-08 14:32:00 EST
*** Bug 175099 has been marked as a duplicate of this bug. ***
Comment 5 Lon Hohberger 2005-12-08 14:33:55 EST
Created attachment 122042 [details]
magma-plugins was incorrectly reading /proc/cluster/services for sizes > 4096
Comment 6 Lon Hohberger 2005-12-08 14:34:40 EST
Setting component to magma-plugins
Comment 7 Lon Hohberger 2005-12-08 14:38:09 EST
Patches in CVS, head, stable, rhel4 -- setting to modified.
Comment 9 Lon Hohberger 2005-12-09 11:30:34 EST
The fix doesn't work; you can't read more than one page from /proc at a time.

However, there's an issue with issuing multiple reads to /proc/cluster/services,
which I've set as a dependency to this bugzilla.  I can make the correct fix for
magma-plugins, but there's no guarantee it will work in all cases until #175372
is fixed.
Comment 10 Lon Hohberger 2005-12-09 11:44:24 EST
Created attachment 122082 [details]
Correct patch against 1.0.2/1.0.3 which reads in page sizes.
Comment 11 Henry Harris 2005-12-09 19:52:08 EST
This patch fixed the problem.  Thanks!
Comment 12 Henry Harris 2005-12-13 18:24:43 EST
The rgmanager-1.9.43-0 and magma-plugins-1.0.3-0.3bz175033 packages seem to be
causing other problems.  With a cluster configured for four nodes, if I bring
up only three of the four nodes, the cluster does not get created even though
there is quorum.  On all three nodes, fenced starts but never returns.
Comment 13 Red Hat Bugzilla 2006-01-06 15:30:36 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

