Bug 127828 - only one mount possible at a time, node names implicated
Status: CLOSED ERRATA
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: gfs (Show other bugs)
Version: 3
Hardware: i686 Linux
Priority: medium  Severity: medium
Assigned To: michael conrad tadpol tilstra
Cluster QE
Depends On:
Blocks: 137219
Reported: 2004-07-14 09:04 EDT by Richard Keech
Modified: 2013-07-03 09:05 EDT (History)
4 users (show)

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2004-09-28 11:00:00 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Richard Keech 2004-07-14 09:04:00 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040510

Description of problem:
Only one mount at a time is possible, i.e.
the first mount, on any node, succeeds.

A second mount attempt of the same file system on
any other node blocks.

Unmounting the file system on the first node allows
the mount attempt on the second node to return
successfully.

Node names of the nodes are initially set to
mebsuta-1, mebsuta-2, mebsuta-3, mebsuta-4

When the node names of all nodes are changed to
mebsuta1, mebsuta2, mebsuta3, mebsuta4, and
all references in the configs are changed accordingly,
the mount problem disappears, i.e. correct cluster
operation is achieved by changing only the host
names (and references to host names) and restarting.



Version-Release number of selected component (if applicable):
GFS-6.0.0-7

How reproducible:
Always

Steps to Reproduce:
Situation 

I have a four-node cluster connected to an IBM SAN using
QLogic FC adapters.  I'm using GFS 6.0.0-7 and kernel 
2.4.21-15.0.3.EL.
Fencing is intended to be using Cisco MDS switches, but for
now is simply set to fence_manual just to get things going.

When I try to run a single lock manager configuration
I _am_ able to create and mount GFS file systems on any one node
at a time.  However I am unable to mount the filesystem
on more than one node at a time. 

All the gfs-related modules are correctly loaded
on all nodes.

The gfs-related service scripts all start successfully
on all nodes.  lock_gulmd is running on all nodes.
gulm_tool nodelist shows one node as master, and others
as clients.

The data and ccs pools are created and visible to all nodes
as shown by pool_info.

I have created /etc/sysconfig/gfs on all nodes as follows:

        POOLS="pool0  testgfs_cca"
        CCS_ARCHIVE="/dev/pool/testgfs_cca"

    

Actual Results:  only one node can mount a file system

Expected Results:  all nodes should mount file system.

Additional info:

It's not clear if the problem was fixed by the removal
of the hyphen in the node names, or by the shortening
of the names to eight characters.
Comment 1 michael conrad tadpol tilstra 2004-07-19 12:08:19 EDT
Yeah, the first 8 characters of a node's name *have* to be unique
right now when using gulm with gfs.

Making sure the first 8 chars are unique is the current workaround.
A real fix is non-trivial, but something I have planned on doing.
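[Editor's illustration, not part of the GFS tooling: the constraint above is that gulm with GFS at this version effectively distinguishes nodes only by the first 8 characters of their names. A minimal sketch of a check for that constraint, with a hypothetical function name of my own choosing:]

```python
def find_prefix_collisions(node_names, prefix_len=8):
    """Return groups of node names whose first `prefix_len` characters collide."""
    groups = {}
    for name in node_names:
        groups.setdefault(name[:prefix_len], []).append(name)
    return [names for names in groups.values() if len(names) > 1]

# The hyphenated names from this report all share the 8-char prefix "mebsuta-":
print(find_prefix_collisions(["mebsuta-1", "mebsuta-2", "mebsuta-3", "mebsuta-4"]))
# -> [['mebsuta-1', 'mebsuta-2', 'mebsuta-3', 'mebsuta-4']]

# The renamed hosts are unique within 8 characters, matching the workaround:
print(find_prefix_collisions(["mebsuta1", "mebsuta2", "mebsuta3", "mebsuta4"]))
# -> []
```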
Comment 2 Kevin Sonney 2004-07-20 09:59:34 EDT
I've had another customer encounter this. If the fix is still pending,
can we at least update the documentation and get a KB entry
describing this and the workaround?
Comment 3 michael conrad tadpol tilstra 2004-07-28 10:32:03 EDT
A fix is in CVS HEAD (could be 6.1). There is a fix ready for 6.0. But
both of these made some changes to the recovery code, and really need
to get hit hard by QA first.

As for the documentation or KB (what's KB?): is that something I do?
Do I re-assign this bug, or just hope someone else does something?
Comment 4 Kevin Sonney 2004-07-28 10:50:13 EDT
KB == Knowledge Base. Contact our support staff, and they can help you
out with that. I think it's something we can self-serve, but Support
controls access to it, and it's best to submit entries to them for
inclusion at this time, AFAIK.
Comment 7 Derek Anderson 2004-09-28 11:00:00 EDT
Verified with GFS-6.0.0-15 vs. GFS-6.0.0-7.  Used hostnames
linklink-13, linklink-14, linklink-15.  Mounted 14 filesystems on all
three nodes with GFS 6.0.0-15.
Comment 8 David Lawrence 2004-09-29 16:00:43 EDT
An errata has been issued which should help the problem 
described in this bug report. This report is therefore being 
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files, 
please follow the link below. You may reopen this bug report 
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2004-490.html
