Red Hat Bugzilla – Bug 166543
NFS Stale locks when NFS export moves to another node
Last modified: 2007-11-30 17:07:19 EST
Description of problem:
I have set up a 4-node cluster with GFS (AS4, Cluster Suite 4, GFS 6.1). I have
3 GFS partitions that I am trying to export via NFS. This works, and I can
mount them on another Red Hat AS3 box. When I move the NFS shares to other
nodes to simulate a failover, I get stale NFS handle errors on the client.
Upon investigation I found an article for Cluster Suite 3 about setting the
FSID for NFS shares, as these need to be the same on each node. The ability to
set that option through the GUI seems to have been removed, but the
documentation states that clurmtabd should handle this.
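For reference, this is the kind of export configuration the article describes: pinning the fsid so the NFS file handle identity stays the same no matter which node serves the export. A sketch only; the paths and fsid values below are hypothetical and must match your own volumes, with the same fsid used for a given volume on every node.

```
# /etc/exports -- hypothetical entries; keep the fsid for each
# GFS volume identical on every cluster node
/mnt/gfs1  *(rw,sync,fsid=1)
/mnt/gfs2  *(rw,sync,fsid=2)
/mnt/gfs3  *(rw,sync,fsid=3)
```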
I have run cat /var/lib/nfs/rmtab on all the nodes when it is working (before
the move) and after (when I am getting the stale NFS handles), and the numbers
are not similar or consistent across the nodes. NOTE: I am assuming the
numbers in these files are the FS numbers.
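For context when comparing these files: each rmtab line records one client mount as host:export-path:reference-count, with the count in hex. The entries below are made up purely for illustration of the format:

```
# /var/lib/nfs/rmtab -- hypothetical entries
client1.example.com:/mnt/gfs1:0x00000001
client1.example.com:/mnt/gfs2:0x00000002
```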
I have checked the major/minor numbers for all the base devices (LVM
partitions) and they are the same on all the machines!
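This check can be scripted per node, roughly as below. A sketch only: /dev/null stands in for the real device paths (e.g. /dev/vg_gfs/lv_share1, a hypothetical name), which you would substitute and then compare across nodes.

```shell
# Print major/minor numbers (in hex) for each block device so the
# output can be diffed across cluster nodes.
for dev in /dev/null; do
    stat -c '%n: major=%t minor=%T' "$dev"
done
```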
here is an example output pre move:
[samad@rhnsat test]$ forall cat /var/lib/nfs/rmtab \; echo
Version-Release number of selected component (if applicable):
Steps to Reproduce:
2. mount nfs exports
3. move nfs export to new node
Actual results:
I get NFS stale locks on the client machine

Expected results:
the NFS share should seamlessly fail over to the new node
Created attachment 117989 [details]
cluster conf file
Created attachment 117990 [details]
rmtab file from 4 nodes
this shows the rmtab file from all the nodes
Assigning to NFS maintainer, but staying on CC list for now.
We think this is a dup of the NFS Failover defect, so we are going to close it
as such and link it to that one: BZ 132823.
*** This bug has been marked as a duplicate of 132823 ***
bug #132823 is protected. Could it be opened to the public? Otherwise it would
be good to keep this one open to have something to track.