Bug 436381 - Quorum not formed before gfs starts with one node in a "4-node + qdisk" configuration
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: cman
Version: 5.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: rc
Assigned To: Lon Hohberger
QA Contact: GFS Bugs
Blocks: 469103
Reported: 2008-03-06 16:17 EST by Afom T. Michael
Modified: 2010-10-22 19:05 EDT
CC List: 6 users
Doc Type: Bug Fix
Last Closed: 2009-01-20 16:51:31 EST


Attachments
Config file (1.59 KB, application/octet-stream), 2008-03-06 16:17 EST, Afom T. Michael
Allow start-if-enabled before fenced in cman script (1.60 KB, patch), 2008-03-07 12:58 EST, Lon Hohberger
Description Afom T. Michael 2008-03-06 16:17:18 EST
Description of problem:
A four-node (Dell PowerEdge 2950) cluster running RHEL 5.1 with GFS1 on CLVM
volumes fails to form quorum when only one node is up and the rest are shut
down. Each node has 1 vote and the quorum device (not part of clvmd) has 3 votes
(see attached config).
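For reference, with this layout the cluster has 7 expected votes (4 x 1 from the nodes plus 3 from the quorum device), so quorum requires 4 votes, which a single node plus the quorum device can supply. On a quorate single node one would therefore expect "cman_tool status" output along these lines (hostname and the elided fields are illustrative, not taken from the reported cluster):

  [root@node1 ~]# cman_tool status
  ...
  Nodes: 1
  Expected votes: 7
  Quorum device votes: 3
  Total votes: 4
  Quorum: 4
  ...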

Version-Release number of selected component (if applicable):
RHEL5.1
Updating to cman-2.0.73-1.el5_1.5 didn't have any effect.

How reproducible:
All the time.

Steps to Reproduce:
1. Configure a 4-node RHEL5.1 GFS1 cluster using IPMI (ipmilan)
2. Create a quorum disk with 3 votes
3. Create GFS1 volumes on shared storage and add them to fstab on all nodes (steps 2 and 3 are sketched below)
4. Reboot one node and shutdown all the other nodes
5. Check whether clvmd started
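Roughly, steps 2 and 3 correspond to commands like these (device paths, labels, and cluster/file-system names are placeholders, not taken from the attached config):

  # step 2: initialize a small shared partition as the quorum disk
  mkqdisk -c /dev/sdX1 -l rhel5_qdisk
  # step 3: create a GFS1 file system with 4 journals and add it to fstab
  gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 4 /dev/vg_gfs/lv_gfs1
  echo "/dev/vg_gfs/lv_gfs1 /mnt/gfs1 gfs defaults 0 0" >> /etc/fstab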

Actual results:

Quorum is online but clvmd didn't start
[root@ora1 sysconfig]# service clvmd status
clvmd is stopped
Some of the messages:
Mar  3 17:58:05 ora1 kernel: dlm: no local IP address has been set
Mar  3 17:58:05 ora1 kernel: dlm: cannot start dlm lowcomms -107
Mar  3 17:58:05 ora1 clvmd: Unable to create lockspace for CLVM: Transport endpoint is not connected

Expected results:
Quorum should be established with one node and the quorum disk.

Additional info:
Once Lon Hohberger changed cman & qdiskd init order, it worked.
Comment 1 Afom T. Michael 2008-03-06 16:17:18 EST
Created attachment 297104 [details]
Config file
Comment 2 Lon Hohberger 2008-03-06 16:36:10 EST
Actually, it's not exactly as described: quorum does form, but clvmd fails to
set up LVs because quorum has not yet formed when clvmd comes up, which then
leaves the GFS volumes inaccessible when the 'gfs' init script is started.

We employed a fix on a test cluster which simply involves reversing the start
order of cman and qdiskd.  Because fenced waits up to 5 minutes (by default) for
a quorum to form, qdiskd can actually cause quorum formation while fenced is
waiting.  This of course allows clvmd to start correctly (the first time!),
which in turn allows GFS file systems to be mounted immediately.

The original reason qdiskd was started after cman was twofold:
(a) cman was required for qdiskd to run, and
(b) qdiskd might need clvmd, so it should start after it.


In RHEL 5, clvmd starts at priority 24 while qdiskd starts at 22.
Therefore, (b) won't occur, and since that configuration is not supported anyway, this is ok.

As of RHEL5.1, qdiskd can start and wait indefinitely for CMAN to come up before
operating in any useful capacity, so the requirement for (a) is negated.

It is my belief that because the cman and qdiskd init scripts are placed in the
same package, we can reverse their ordering safely.
Comment 3 Lon Hohberger 2008-03-06 16:36:58 EST
FWIW, cman starts at priority 21 and qdiskd at priority 22.
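On a RHEL 5 system the resulting ordering can be checked by listing the rc symlinks; the priorities shown for cman, qdiskd, and clvmd are the ones quoted in comments 2 and 3 (hostname, runlevel, and the exact listing are illustrative):

  [root@node1 ~]# ls /etc/rc.d/rc3.d/ | grep -E 'cman|qdiskd|clvmd'
  S21cman
  S22qdiskd
  S24clvmd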
Comment 4 Lon Hohberger 2008-03-07 12:58:08 EST
Created attachment 297229 [details]
Allow start-if-enabled before fenced in cman script

This patches the cman and qdiskd init scripts so that qdiskd is started before
fenced if it is chkconfig'd on.

Failure of qdiskd to start (for example, if it lives on clustered LVs) is non-fatal
to the cman init script, so if qdiskd is chkconfig'd on and ordered later in the
start sequence (for example, after clvmd), it will be started a second time if needed.

A subsequent 'start' of the qdiskd init script, once qdiskd is already running, no
longer prints "QDisk services already running", so if qdiskd was started successfully
from the cman script, the qdiskd init script does not misbehave or raise alarms.
Comment 8 David Teigland 2008-03-20 17:08:23 EDT
Mar  3 17:58:05 ora1 kernel: dlm: no local IP address has been set
Mar  3 17:58:05 ora1 kernel: dlm: cannot start dlm lowcomms -107
Mar  3 17:58:05 ora1 clvmd: Unable to create lockspace for CLVM: Transport

The above is usually due to dlm_controld not being started, or configfs not
being mounted.
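Two quick checks for those causes (hostname and commands are illustrative, not from the original report):

  [root@node1 ~]# ps -C dlm_controld -o pid,args    # is dlm_controld running?
  [root@node1 ~]# mount | grep configfs             # is configfs mounted on /sys/kernel/config?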
Comment 9 Lon Hohberger 2008-03-31 10:20:03 EDT
I reproduced those messages simply by trying to start clvmd on an inquorate
cluster (after starting cman with the init script, which starts all daemons +
mounts configfs).

* clvmd exits immediately after startup:

  [root@frederick ~]# clvmd -d
  CLVMD[aaee4430]: Mar 31 10:12:43 CLVMD started
  CLVMD[aaee4430]: Mar 31 10:12:43 Connected to CMAN
  CLVMD[aaee4430]: Mar 31 10:12:43 CMAN initialisation complete
  CLVMD[aaee4430]: Mar 31 10:12:43 Can't initialise cluster interface
    Can't initialise cluster interface

  (in dmesg:)
  dlm: no local IP address has been set
  dlm: cannot start dlm lowcomms -107

* if, however, I make the cluster quorate, then inquorate, starting clvmd at
that point produces the following:

  [root@frederick ~]# clvmd -d
  CLVMD[aaee4430]: Mar 31 10:16:13 CLVMD started
  CLVMD[aaee4430]: Mar 31 10:16:13 Connected to CMAN
  CLVMD[aaee4430]: Mar 31 10:16:13 CMAN initialisation complete
  (waits for quorum)

* If at this point, I make the cluster quorate, clvmd wakes up:

  CLVMD[aaee4430]: Mar 31 10:16:55 Cluster ready, doing some more initialisation
  CLVMD[aaee4430]: Mar 31 10:16:55 starting LVM thread
  CLVMD[aaee4430]: Mar 31 10:16:55 clvmd ready for work
  CLVMD[aaee4430]: Mar 31 10:16:55 Using timeout of 60 seconds
  CLVMD[aaee4430]: Mar 31 10:16:55 Got state change message, 
                   re-reading members list
  CLVMD[41401940]: Mar 31 10:16:55 LVM thread function started


So, I don't even think this is an ordering bug.  CLVMD isn't staying up the
first time unless the cluster is quorate.  Mitigating it by flipping the init
script ordering only masks the problem: any time a cluster node boots while the
cluster is not quorate, administrator intervention is required to bring up
clustered LVM volumes after that node becomes part of the cluster quorum.
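The administrator intervention referred to above would look something like this once the node is part of a quorate cluster (hypothetical session):

  [root@node1 ~]# cman_tool status | grep -i quorum   # confirm the cluster is quorate
  [root@node1 ~]# service clvmd start                 # activate clustered LVM volumes
  [root@node1 ~]# service gfs start                   # mount the GFS entries from fstab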
Comment 10 Lon Hohberger 2008-03-31 10:21:08 EDT
Note that I think this init script patch should be included in the near future,
but ultimately, I do not believe it solves this particular bug.
Comment 11 Christine Caulfield 2008-03-31 10:29:35 EDT
clvmd is simply the victim in all this. 

The reason it can't start is that the DLM is not being configured by
dlm_controld, either because it hasn't been (fully) started or it has
encountered an error of some kind.
Comment 18 errata-xmlrpc 2009-01-20 16:51:31 EST
An advisory has been issued which should help with the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-0189.html
