Bug 1463512 - USS: stale snap entries are seen when activation/deactivation performed during one of the glusterd's unavailability
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: snapshot
Version: 3.11
Hardware/OS: All / All
Priority: unspecified
Severity: medium
Assigned To: Mohammed Rafi KC
Keywords: Triaged
Depends On: 1448150
Blocks: 1165648
 
Reported: 2017-06-21 03:33 EDT by Mohammed Rafi KC
Modified: 2017-08-12 09:07 EDT (History)
1 user

See Also:
Fixed In Version: glusterfs-3.11.2
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1448150
Environment:
Last Closed: 2017-08-12 09:07:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Mohammed Rafi KC 2017-06-21 03:33:20 EDT
+++ This bug was initially created as a clone of Bug #1448150 +++

Description of problem:

Stale snap entries are seen in the USS path when snapshot activation/deactivation is performed while one of the glusterd instances is unavailable. Even after that glusterd is brought back to a normal state, the stale entries are not synced.

Version-Release number of selected component (if applicable):


How reproducible:

always

Steps to Reproduce:
1. Create a snapshot and activate it.
2. Enable USS.
3. Kill glusterd on one node.
4. Deactivate the snapshot from another node's glusterd.
5. Bring back the dead glusterd.
6. Check the snapshot entry in the USS path (the mount should be served through the node whose glusterd was killed).
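The steps above map onto the gluster CLI roughly as follows. This is a sketch for a two-node cluster; the node, volume, snapshot, and mount-point names (node1, node2, vol1, snap1, /mnt/vol1) are placeholders, and it assumes glusterd is managed by systemd:

```shell
# On node1 (names below are placeholders)
gluster snapshot create snap1 vol1
gluster snapshot activate snap1
gluster volume set vol1 features.uss enable

# On node1: stop its glusterd
systemctl stop glusterd

# On node2: deactivate the snapshot while node1's glusterd is down
gluster snapshot deactivate snap1

# On node1: bring glusterd back, then check the USS path of a client
# mount that is served through node1
systemctl start glusterd
ls /mnt/vol1/.snaps    # bug: snap1 is still listed although deactivated
```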

Actual results:

The snapshot entry is listed.

Expected results:

The snapshot entry should not be listed, because the snapshot is deactivated.

Additional info:

--- Additional comment from Worker Ant on 2017-05-04 11:57:34 EDT ---

REVIEW: https://review.gluster.org/17178 (snapview-server : Refresh the snapshot list during each reconnect) posted (#1) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Worker Ant on 2017-05-05 03:07:59 EDT ---

REVIEW: https://review.gluster.org/17178 (snapview-server : Refresh the snapshot list during each reconnect) posted (#2) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Worker Ant on 2017-05-08 01:49:52 EDT ---

COMMIT: https://review.gluster.org/17178 committed in master by Raghavendra Bhat (raghavendra@redhat.com) 
------
commit 21115ae8b80c1ae0afe8427423ca5ecde40f0027
Author: Mohammed Rafi KC <rkavunga@redhat.com>
Date:   Thu May 4 20:56:43 2017 +0530

    snapview-server : Refresh the snapshot list during each reconnect
    
    Currently we are refreshing the snapshot list either when there is
    a request from glusterd or the very first initialization. But if
    anything changed after when glusterd is down then there is no
    mechanism to refresh the snashot dentries.
    
    This patch will refresh snapshot list during each reconnect
    
    Change-Id: I3ed655572d777f60d57dd479d190f75553591267
    BUG: 1448150
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: https://review.gluster.org/17178
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
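The commit above changes snapview-server so that its cached snapshot list is refreshed on every reconnect to glusterd, not only at initialization or on an explicit request. A minimal, self-contained model of that behavior change (hypothetical class and method names, not actual GlusterFS code):

```python
# Model of the fix: snapview-server caches the snapshot list it gets from
# glusterd.  Before the patch the cache was filled once at init, so a
# deactivation done while its glusterd was down was never picked up.
# The patch refreshes the cache on each reconnect.

class Glusterd:
    """Stands in for the management daemon snapview-server talks to."""
    def __init__(self, snapshots):
        self.snapshots = set(snapshots)

    def deactivate(self, name):
        self.snapshots.discard(name)


class SnapviewServer:
    def __init__(self, glusterd):
        self.glusterd = glusterd
        self.cache = set(glusterd.snapshots)  # filled once at init

    def reconnect(self, refresh):
        # refresh=False models the old behavior (stale cache survives);
        # refresh=True models the patched behavior.
        if refresh:
            self.cache = set(self.glusterd.snapshots)


d = Glusterd({"snap1"})
svs = SnapviewServer(d)
d.deactivate("snap1")          # deactivated while this server's glusterd was down

svs.reconnect(refresh=False)   # old behavior: stale entry survives
assert "snap1" in svs.cache

svs.reconnect(refresh=True)    # patched behavior: list is refreshed
assert "snap1" not in svs.cache
```

In the model, as in the patch, the refresh is unconditional on reconnect, which keeps the USS dentries consistent without needing glusterd to push an update.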
Comment 1 Worker Ant 2017-06-21 03:33:55 EDT
REVIEW: https://review.gluster.org/17585 (snapview-server : Refresh the snapshot list during each reconnect) posted (#1) for review on release-3.11 by mohammed rafi  kc (rkavunga@redhat.com)
Comment 2 Worker Ant 2017-07-03 08:52:49 EDT
COMMIT: https://review.gluster.org/17585 committed in release-3.11 by Shyamsundar Ranganathan (srangana@redhat.com) 
------
commit 78242268318ad85b8016cb012ed3de605d6e4b8c
Author: Mohammed Rafi KC <rkavunga@redhat.com>
Date:   Thu May 4 20:56:43 2017 +0530

    snapview-server : Refresh the snapshot list during each reconnect
    
    Currently we are refreshing the snapshot list either when there is
    a request from glusterd or the very first initialization. But if
    anything changed after when glusterd is down then there is no
    mechanism to refresh the snashot dentries.
    
    This patch will refresh snapshot list during each reconnect
    
    backport of>
    
    >Change-Id: I3ed655572d777f60d57dd479d190f75553591267
    >BUG: 1448150
    >Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    >Reviewed-on: https://review.gluster.org/17178
    >Smoke: Gluster Build System <jenkins@build.gluster.org>
    >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    >Reviewed-by: Amar Tumballi <amarts@redhat.com>
    >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    >Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    >Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    
    Change-Id: I3ed655572d777f60d57dd479d190f75553591267
    BUG: 1463512
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
    Reviewed-on: https://review.gluster.org/17585
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Comment 3 Shyamsundar 2017-08-12 09:07:04 EDT
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.11.2, please open a new bug report.

glusterfs-3.11.2 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-July/031908.html
[2] https://www.gluster.org/pipermail/gluster-users/
