Bug 1175738 - [USS]: data unavailability for a period of time when USS is enabled/disabled
Summary: [USS]: data unavailability for a period of time when USS is enabled/disabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: 3.6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Sachin Pandit
QA Contact:
URL:
Whiteboard: USS
Depends On: 1168643
Blocks: glusterfs-3.6.2
 
Reported: 2014-12-18 13:51 UTC by Vijaikumar Mallikarjuna
Modified: 2016-05-11 22:47 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.6.2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1168643
Environment:
Last Closed: 2015-02-11 09:11:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Comment 1 Anand Avati 2014-12-19 07:20:33 UTC
REVIEW: http://review.gluster.org/9310 (gluster/uss: Handle notify in snapview-client) posted (#1) for review on release-3.6 by Sachin Pandit (spandit)

Comment 2 Anand Avati 2014-12-19 08:59:34 UTC
REVIEW: http://review.gluster.org/9310 (gluster/uss: Handle notify in snapview-client.) posted (#2) for review on release-3.6 by Sachin Pandit (spandit)

Comment 3 Anand Avati 2014-12-24 07:17:20 UTC
REVIEW: http://review.gluster.org/9310 (gluster/uss: Handle notify in snapview-client) posted (#3) for review on release-3.6 by Sachin Pandit (spandit)

Comment 4 Anand Avati 2014-12-24 15:03:33 UTC
COMMIT: http://review.gluster.org/9310 committed in release-3.6 by Raghavendra Bhat (raghavendra) 
------
commit 8df622789ff991eba1ea01c7f8aa50ac6e507b31
Author: vmallika <vmallika>
Date:   Thu Nov 27 18:38:59 2014 +0530

    gluster/uss: Handle notify in snapview-client
    
    As there are two subvolumes in snapview-client, there is
    a possibility that the regular subvolume is still down while the
    snapd subvolume comes up first. If this situation is not handled,
    the CHILD_UP event will be propagated upwards to fuse while the regular
    subvolume is still down. This can make data unavailable to the application.
    
    Change-Id: I9e5166ed22c2cf637c15db0457c2b57ca044078e
    BUG: 1175738
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/9205
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
    Signed-off-by: Sachin Pandit <spandit>
    Reviewed-on: http://review.gluster.org/9310
    Reviewed-by: Raghavendra Bhat <raghavendra>
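
The commit message above describes an event-ordering problem: snapview-client has two children (the regular data subvolume and snapd), and a CHILD_UP coming only from snapd must not be forwarded to the parent while the data path is still down. Below is a minimal, self-contained C model of that gating idea; it is an illustrative sketch, not the actual GlusterFS patch, and all names in it (child_up, notify_parent, on_child_event) are invented for the example.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative model only: two child subvolumes feed events into a
 * parent translator. REGULAR is the normal data path, SNAPD serves
 * the snapshot view. */
enum child { CHILD_REGULAR = 0, CHILD_SNAPD = 1 };
enum event { EVENT_CHILD_UP, EVENT_CHILD_DOWN };

static bool child_up[2];          /* tracked state of each child     */
static bool parent_notified_up;   /* whether CHILD_UP went upwards   */

/* Forward an event to the parent (stands in for the real "pass the
 * event up the graph" call). */
static void notify_parent(enum event ev)
{
    printf("-> parent gets %s\n",
           ev == EVENT_CHILD_UP ? "CHILD_UP" : "CHILD_DOWN");
}

/* Core idea of the fix: do not propagate CHILD_UP until the regular
 * subvolume is up; a CHILD_UP that comes only from snapd is held back. */
static void on_child_event(enum child who, enum event ev)
{
    child_up[who] = (ev == EVENT_CHILD_UP);

    if (ev == EVENT_CHILD_UP) {
        if (child_up[CHILD_REGULAR] && !parent_notified_up) {
            parent_notified_up = true;
            notify_parent(EVENT_CHILD_UP);
        }
    } else { /* EVENT_CHILD_DOWN */
        if (who == CHILD_REGULAR && parent_notified_up) {
            parent_notified_up = false;
            notify_parent(EVENT_CHILD_DOWN);
        }
    }
}

int main(void)
{
    /* snapd comes up first: nothing is sent upwards yet. */
    on_child_event(CHILD_SNAPD, EVENT_CHILD_UP);
    /* regular subvolume comes up: now CHILD_UP is propagated. */
    on_child_event(CHILD_REGULAR, EVENT_CHILD_UP);
    /* regular subvolume goes down: CHILD_DOWN is propagated. */
    on_child_event(CHILD_REGULAR, EVENT_CHILD_DOWN);
    return 0;
}

In the real translator the equivalent check lives in its notify() callback and uses the GlusterFS event constants; the sketch only shows why holding back CHILD_UP until the regular subvolume is up avoids signalling a ready graph while the data is still unreachable.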

Comment 5 Raghavendra Bhat 2015-01-06 10:43:39 UTC
Description of problem:
=======================

The data created from the NFS client disappears for brief periods when USS is enabled/disabled.

[root@wingo vol0]# ls
etc
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc
[root@wingo vol0]# ls
etc  etc1
[root@wingo vol0]# ls
etc  etc1

In the above output, the directory etc was created from the FUSE mount and etc1 was created from the NFS mount; ls is run from the FUSE mount. The etc1 entry intermittently disappears whenever USS is enabled/disabled from the server.

Version-Release number of selected component (if applicable):
==============================================================

glusterfs-3.6.1


How reproducible:
=================

Was able to reproduce multiple times


Steps to Reproduce:
===================
1. Create a 4-node cluster
2. Create and start a 2x2 volume
3. Mount the volume via FUSE on /mnt/vol0 and via NFS on /mnt/nvol0
4. From the FUSE mount: cp -rf /etc /mnt/vol0/
5. From the NFS mount: cp -rf /etc /mnt/nvol0/etc1
6. From a server node, run gluster volume set vol0 uss on/off in a loop
7. While the USS enable/disable loop is in progress, run ls from the FUSE mount on /mnt/vol0/

Actual results:
================

Sometimes ls shows the data that was created from the NFS mount and sometimes it does not.


Expected results:
=================

The data was created before the enable/disable was performed, so it should always be shown.

Comment 6 Raghavendra Bhat 2015-02-11 09:11:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.2, please reopen this bug report.

glusterfs-3.6.2 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be available or will become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.6.2.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137

