REVIEW: http://review.gluster.org/9309 (gluster/uss: Handle ESTALE in snapview client when NFS server restarted) posted (#1) for review on release-3.6 by Sachin Pandit (spandit)
REVIEW: http://review.gluster.org/9309 (gluster/uss: Handle ESTALE in snapview client when NFS server restarted) posted (#2) for review on release-3.6 by Sachin Pandit (spandit)
REVIEW: http://review.gluster.org/9309 (gluster/uss: Handle ESTALE in snapview client when NFS server restarted) posted (#3) for review on release-3.6 by Sachin Pandit (spandit)
COMMIT: http://review.gluster.org/9309 committed in release-3.6 by Raghavendra Bhat (raghavendra)
------
commit f47c24d518d1b10bca04a16737dab88bba53a07e
Author: Sachin Pandit <spandit>
Date: Fri Dec 19 03:54:45 2014 +0530

    gluster/uss: Handle ESTALE in snapview client when NFS server restarted

    When the NFS server is restarted, the inode context is lost. A
    nameless (gfid-only) lookup will then be sent to the regular volume.
    If the gfid belongs to the virtual graph, that lookup fails with
    ESTALE, so we need to send the lookup to the snapview server instead.

    Change-Id: I22920614f0d14cb90b53653fce95b6b70023eba6
    BUG: 1175736
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/9153
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Sachin Pandit <spandit>
    Reviewed-by: Vijay Bellur <vbellur>
    Signed-off-by: Sachin Pandit <spandit>
    Reviewed-on: http://review.gluster.org/9309
    Reviewed-by: Raghavendra Bhat <raghavendra>
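To make the mechanism in the commit message concrete, here is a minimal sketch of the idea in GlusterFS xlator style. It is illustrative only: the names svc_lookup_cbk, svc_local_t, the tried_snapd flag, and the svc_gfid_is_virtual() helper are assumptions made for this sketch, not the code from the actual patch (see the Gerrit links above for that).

/* Hypothetical sketch of the ESTALE handling described in the commit
 * message above; names are illustrative, not the real snapview-client
 * source. */

#include <errno.h>
#include "xlator.h"    /* call_frame_t, xlator_t, STACK_WIND, loc_t, ... */

/* snapview-client sits on top of two subvolumes: the regular volume
 * and the snapview (snapd) server. Address them as first/second child. */
#define SVC_REGULAR_CHILD(xl)  ((xl)->children->xlator)
#define SVC_SNAPD_CHILD(xl)    ((xl)->children->next->xlator)

typedef struct {
        loc_t        loc;          /* the lookup target (gfid-only here) */
        dict_t      *xdata;
        gf_boolean_t tried_snapd;  /* avoid winding to snapd twice */
} svc_local_t;

/* Hypothetical helper: does this gfid belong to the virtual graph? */
extern gf_boolean_t svc_gfid_is_virtual (uuid_t gfid);

int32_t
svc_lookup_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                int32_t op_ret, int32_t op_errno, inode_t *inode,
                struct iatt *buf, dict_t *xdata, struct iatt *postparent)
{
        svc_local_t *local = frame->local;

        /* After an NFS server restart the inode context is gone, so a
         * nameless lookup was wound to the regular volume. If it answers
         * ESTALE and the gfid is virtual, retry on the snapview server
         * instead of failing the lookup. */
        if (op_ret < 0 && op_errno == ESTALE && local &&
            svc_gfid_is_virtual (local->loc.gfid) && !local->tried_snapd) {
                local->tried_snapd = _gf_true;
                STACK_WIND (frame, svc_lookup_cbk, SVC_SNAPD_CHILD (this),
                            SVC_SNAPD_CHILD (this)->fops->lookup,
                            &local->loc, local->xdata);
                return 0;
        }

        STACK_UNWIND_STRICT (lookup, frame, op_ret, op_errno, inode,
                             buf, xdata, postparent);
        return 0;
}

The key design point is that the retry is bounded by the tried_snapd flag, so a gfid that is stale on both subvolumes still fails cleanly with ESTALE rather than looping between the two children.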
Description of problem:
=======================
After deactivating a snapshot, trying to access the remaining activated snapshots from an NFS mount gives an 'Invalid argument' error.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.6.1

How reproducible:
=================
3/3

Steps to Reproduce:
===================
1. Create a 2x2 dist-rep volume and start it
2. Fuse and NFS mount the volume and enable USS on the volume
3. Create IO on the volume and take a few snapshots
4. Activate all the snapshots
5. cd to .snaps from the fuse and nfs mounts; all snapshots are listed
6. Deactivate one snapshot:

   gluster snapshot deactivate vol0_snap1
   Deactivating snap will make its data inaccessible. Do you want to continue? (y/n) y
   Snapshot deactivate: vol0_snap1: Snap deactivated successfully

7. cd to .snaps from the nfs mount - it fails with an 'Invalid argument' error

List of snapshots when the snapshots were activated:
====================================================
[root@dhcp-0-97 .snaps]# ll
total 192
drwxr-xr-x. 13 root root   378 Nov 19 18:05 vol0_snap1
drwxr-xr-x. 31 root root 16384 Nov 19 18:10 vol0_snap10
drwxr-xr-x. 34 root root 16384 Nov 19 18:11 vol0_snap11
drwxr-xr-x. 34 root root 16384 Nov 19 18:11 vol0_snap12
drwxr-xr-x. 37 root root 16384 Nov 19 18:12 vol0_snap13
drwxr-xr-x. 39 root root 16384 Nov 19 18:13 vol0_snap14
drwxr-xr-x. 40 root root 16384 Nov 19 18:13 vol0_snap15
drwxr-xr-x. 42 root root 16384 Nov 19 18:14 vol0_snap16
drwxr-xr-x. 16 root root   480 Nov 19 18:06 vol0_snap2
drwxr-xr-x. 18 root root   514 Nov 19 18:06 vol0_snap3
drwxr-xr-x. 19 root root   582 Nov 19 18:07 vol0_snap4
drwxr-xr-x. 22 root root 16384 Nov 19 18:07 vol0_snap5
drwxr-xr-x. 24 root root 16384 Nov 19 18:08 vol0_snap6
drwxr-xr-x. 27 root root 16384 Nov 19 18:08 vol0_snap7
drwxr-xr-x. 28 root root 16384 Nov 19 18:09 vol0_snap8
drwxr-xr-x. 30 root root 16384 Nov 19 18:09 vol0_snap9

After deactivating one snapshot, ll on .snaps fails:
====================================================
[root@dhcp-0-97 .snaps]# ll
ls: reading directory .: Invalid argument
total 0
[root@dhcp-0-97 .snaps]# ll
ls: cannot open directory .: Invalid argument
[root@dhcp-0-97 .snaps]# ll
ls: cannot open directory .: Invalid argument
[root@dhcp-0-97 .snaps]#

Actual results:
===============
After deactivating a snapshot, accessing the remaining activated snapshots from the NFS mount gives an 'Invalid argument' error.

Expected results:
=================
Even after deactivating a few snapshots, the remaining activated snapshots should be listed under .snaps.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.2, please reopen this bug report.

glusterfs-3.6.2 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be, or will soon become, available. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.6.2.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137