REVIEW: http://review.gluster.org/9310 (gluster/uss: Handle notify in snapview-client) posted (#1) for review on release-3.6 by Sachin Pandit (spandit)
REVIEW: http://review.gluster.org/9310 (gluster/uss: Handle notify in snapview-client.) posted (#2) for review on release-3.6 by Sachin Pandit (spandit)
REVIEW: http://review.gluster.org/9310 (gluster/uss: Handle notify in snapview-client) posted (#3) for review on release-3.6 by Sachin Pandit (spandit)
COMMIT: http://review.gluster.org/9310 committed in release-3.6 by Raghavendra Bhat (raghavendra)
------
commit 8df622789ff991eba1ea01c7f8aa50ac6e507b31
Author: vmallika <vmallika>
Date:   Thu Nov 27 18:38:59 2014 +0530

    gluster/uss: Handle notify in snapview-client

    Since snapview-client has two subvolumes, it is possible for the snapd
    subvolume to come up first while the regular subvolume is still down.
    If this situation is not handled, a CHILD_UP event is propagated
    upwards to fuse even though the regular subvolume is still down, which
    can make data unavailable to the application.

    Change-Id: I9e5166ed22c2cf637c15db0457c2b57ca044078e
    BUG: 1175738
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/9205
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
    Signed-off-by: Sachin Pandit <spandit>
    Reviewed-on: http://review.gluster.org/9310
    Reviewed-by: Raghavendra Bhat <raghavendra>
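For context, the gist of the change is event aggregation in the xlator's notify path: CHILD_UP from the snapd subvolume alone must not be forwarded to the parent. Below is a minimal illustrative sketch in C of that idea, not the actual patch (see http://review.gluster.org/9205 for that). The names demo_private_t and demo_notify are made up, it assumes the regular data subvolume is the xlator's first child, and locking around the state flags is omitted.

#include "xlator.h"    /* xlator_t, FIRST_CHILD, GF_EVENT_* */
#include "defaults.h"  /* default_notify */

/* Hypothetical per-instance state; the real snapview-client keeps
 * equivalent flags in its private structure. */
typedef struct {
        gf_boolean_t regular_up;  /* regular data subvolume reachable */
        gf_boolean_t snapd_up;    /* snapd subvolume reachable */
} demo_private_t;

int32_t
demo_notify (xlator_t *this, int32_t event, void *data, ...)
{
        xlator_t       *subvol = data;
        demo_private_t *priv   = this->private;

        switch (event) {
        case GF_EVENT_CHILD_UP:
                if (subvol == FIRST_CHILD (this))
                        priv->regular_up = _gf_true;
                else
                        priv->snapd_up = _gf_true;

                /* Swallow CHILD_UP coming from snapd alone: the parent
                 * (and eventually fuse) must not see CHILD_UP until the
                 * regular data subvolume is actually up. */
                if (!priv->regular_up)
                        return 0;
                break;

        case GF_EVENT_CHILD_DOWN:
                if (subvol == FIRST_CHILD (this)) {
                        priv->regular_up = _gf_false;
                } else {
                        /* A snapd-only outage does not make regular data
                         * unavailable, so do not propagate it as a
                         * CHILD_DOWN of the whole graph. */
                        priv->snapd_up = _gf_false;
                        return 0;
                }
                break;

        default:
                break;
        }

        /* Everything else (and qualifying UP/DOWN events) takes the
         * stock propagation path. */
        return default_notify (this, event, data);
}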
Description of problem:
=======================
Data created from the NFS client briefly disappears when USS is enabled/disabled.

[root@wingo vol0]# ls
etc
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc  etc1

In the above output, the directory etc was created from the fuse mount and etc1 was created from the nfs mount; ls is performed from the fuse mount. This happens whenever USS is enabled/disabled from the server.

Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.6.1

How reproducible:
=================
Was able to reproduce multiple times.

Steps to Reproduce:
===================
1. Create a 4-node cluster.
2. Create and start a 2x2 volume.
3. Mount the volume (Fuse and NFS) on /mnt/vol0 and /mnt/nvol0 respectively.
4. From the Fuse mount: cp -rf /etc /mnt/vol0/
5. From the NFS mount: cp -rf /etc /mnt/nvol0/etc1
6. From the server, run gluster volume set vol0 uss on/off in a loop.
7. While the uss enable/disable loop is in progress, run ls from the fuse mount on /mnt/vol0/.

Actual results:
===============
Sometimes ls shows the data created from the nfs mount and sometimes it does not.

Expected results:
=================
Data created before the enable/disable was performed should always be visible.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.2, please reopen this bug report.

glusterfs-3.6.2 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be, or will soon become, available. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.6.2.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137