Bug 1247125 - nfs-ganesha: volume stop and a subsequent start throws "Stale file handle" error on mount-point
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: 3.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Assigned To: Jiffin
QA Contact: storage-qa-internal@redhat.com
Reported: 2015-07-27 08:01 EDT by Saurabh
Modified: 2016-07-15 05:56 EDT
CC: 6 users

Doc Type: Bug Fix
Last Closed: 2016-07-15 05:56:02 EDT
Type: Bug

Attachments: None
Description Saurabh 2015-07-27 08:01:32 EDT
Description of problem:
If the volume is already mounted and you stop and subsequently start the volume, access to the mount-point throws a "Stale file handle" error.

Version-Release number of selected component (if applicable):
glusterfs-3.7.1-11.el6rhs.x86_64
nfs-ganesha-2.2.0-5.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume and start it.
2. Configure nfs-ganesha and mount the volume (see the command sketch after these steps).
3. gluster volume stop <volname>
4. gluster volume start <volname>
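
A minimal command sketch of these steps. The volume name "testvol", node names node1..node6, brick paths, and the mount point /mnt are placeholders, not the values from the original setup, and it assumes the nfs-ganesha HA configuration (ganesha-ha.conf) is already in place:

# gluster volume create testvol replica 2 \
      node{1,2}:/bricks/b1/testvol node{3,4}:/bricks/b1/testvol node{5,6}:/bricks/b1/testvol \
      node{1,2}:/bricks/b2/testvol node{3,4}:/bricks/b2/testvol node{5,6}:/bricks/b2/testvol
# gluster volume start testvol
# gluster volume set all cluster.enable-shared-storage enable
# gluster nfs-ganesha enable
# gluster volume set testvol ganesha.enable on

On the client:

# mount -t nfs -o vers=3 <server-or-VIP>:/testvol /mnt

Then stop and start the volume from a server node and access the mount:

# gluster volume stop testvol
# gluster volume start testvol
# ls /mnt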

Actual results:
# ls /mnt
ls: cannot access /mnt: Stale file handle

Expected results:
The mount should be accessible and there should not be any ESTALE issue.

Additional info:

The workaround is to remount the volume.
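
For example (a sketch; the server address, volume name, and mount point are placeholders):

# umount /mnt          (or "umount -l /mnt" if the unmount hangs)
# mount -t nfs -o vers=3 <server-or-VIP>:/<volname> /mnt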
Comment 3 Jiffin 2016-02-04 06:07:32 EST
I could not reproduce this issue with the latest code.
Comment 4 Shashank Raj 2016-07-15 05:56:02 EDT
Verified this bug with the latest 3.1.3 build, using both NFSv3 and NFSv4.

With v3:

1) Create a 2x2 distributed-replicate volume and enable ganesha on it.

[root@dhcp43-208 ~]# gluster vol info v1
 
Volume Name: v1
Type: Distributed-Replicate
Volume ID: a074ba63-ba55-4adb-9c97-7ec37b0c72d5
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.208:/bricks/brick0/b0
Brick2: 10.70.43.184:/bricks/brick0/b0
Brick3: 10.70.42.250:/bricks/brick0/b0
Brick4: 10.70.43.22:/bricks/brick0/b0
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: off
nfs.disable: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
nfs-ganesha: enable
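
The exact commands used for this step are not shown in the comment; a sketch consistent with the vol info output above (and assuming the ganesha HA cluster is already configured) would be:

# gluster volume create v1 replica 2 \
      10.70.43.208:/bricks/brick0/b0 10.70.43.184:/bricks/brick0/b0 \
      10.70.42.250:/bricks/brick0/b0 10.70.43.22:/bricks/brick0/b0
# gluster volume start v1
# gluster volume set v1 ganesha.enable on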

2) Mount the volume on the client with NFSv3:

10.70.40.205:/v1 on /mnt/nfs2 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.70.40.205,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.70.40.205)
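
The mount command itself is not shown; based on the options above, it would be along the lines of:

# mount -t nfs -o vers=3 10.70.40.205:/v1 /mnt/nfs2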

3) Stopped and started the volume:

[root@dhcp43-208 ~]# gluster vol stop v1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v1: success

[root@dhcp43-208 ~]# gluster vol start v1
volume start: v1: success

4) After the start, the volume is still mounted on the client, and ls lists the files under it:

[root@dhcp46-206 ~]# df
Filesystem                              1K-blocks      Used  Available Use% Mounted on
10.70.40.205:/v1                         62559232    368640   62190592   1% /mnt/nfs2

[root@dhcp46-206 ~]# ls /mnt/nfs2
file

5) There is no need to remount the volume after the stop/start.

With v4:

1) Mount the same volume with NFSv4:

10.70.40.205:/v1 on /mnt/nfs2 type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.70.46.206,local_lock=none,addr=10.70.40.205)
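
Again, the mount command is not shown; it corresponds to something like:

# mount -t nfs -o vers=4.0 10.70.40.205:/v1 /mnt/nfs2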


2) Stop and start the volume:

[root@dhcp43-208 ~]# gluster vol stop v1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v1: success
[root@dhcp43-208 ~]# gluster vol start v1
volume start: v1: success

3) After the start, the volume is still mounted on the client, and ls lists the files under it:

[root@dhcp46-206 ~]# df
Filesystem                              1K-blocks      Used  Available Use% Mounted on
10.70.40.205:/v1                         62559232    368640   62190592   1% /mnt/nfs2

[root@dhcp46-206 ~]# ls /mnt/nfs2
file

4) There is no need to remount the volume after the stop/start.

Conclusion: The earlier reported issue is no longer seen with the latest 3.1.3 builds; hence, closing this bug as CLOSED WORKSFORME.
