Bug 1283972

Summary: dht must avoid fresh lookups when a single replica pair goes offline
Product: [Community] GlusterFS
Component: distribute
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Sakshi <sabansal>
Assignee: Sakshi <sabansal>
CC: bugs, smohan
Keywords: Triaged
Fixed In Version: glusterfs-3.7.12
Doc Type: Bug Fix
Clone Of: 1281230
Last Closed: 2016-06-28 12:13:22 UTC
Type: Bug
Bug Depends On: 1281230
Bug Blocks: 1330901

Description Sakshi 2015-11-20 12:25:11 UTC
+++ This bug was initially created as a clone of Bug #1281230 +++

Description of problem:
Currently, even if a single replica pair goes down, fresh lookups are performed for all files and directories even though there are no layout changes. DHT must therefore avoid fresh lookups when bricks go down.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume, mount the volume, and create a few directories.
2. Bring one replica pair down.
3. Perform a lookup on the directories.

Actual results:
Fresh lookups are performed on all the directories.

Expected results:
Fresh lookups must be avoided; the layout xattrs should instead be read from the other replica pair.

Comment 1 Vijay Bellur 2015-11-26 17:31:17 UTC
REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#1) for review on release-3.7 by Sakshi Bansal

Comment 2 Vijay Bellur 2016-03-07 04:47:57 UTC
REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#2) for review on release-3.7 by Sakshi Bansal

Comment 3 Mike McCune 2016-03-28 23:31:34 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.

Comment 4 Vijay Bellur 2016-04-07 06:33:14 UTC
REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#3) for review on release-3.7 by Sakshi Bansal

Comment 5 Vijay Bellur 2016-04-27 07:52:28 UTC
COMMIT: http://review.gluster.org/12767 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit fa78b755e9c58328c1df4ef1bfeb752d47534a4a
Author: Sakshi Bansal <sabansal>
Date:   Thu Nov 12 12:28:53 2015 +0530

    afr: replica pair going offline does not require CHILD_MODIFIED event
    
    As a part of CHILD_MODIFIED event DHT forgets the current layout and
    performs fresh lookup. However this is not required when a replica pair
    goes offline as the xattrs can be read from other replica pairs. Hence
    setting different event to handle replica pair going down.
    
    > Backport of http://review.gluster.org/#/c/12573/
    
    > Change-Id: I5ede2a6398e63f34f89f9d3c9bc30598974402e3
    > BUG: 1281230
    > Signed-off-by: Sakshi Bansal <sabansal>
    > Reviewed-on: http://review.gluster.org/12573
    > Reviewed-by: Ravishankar N <ravishankar>
    > Reviewed-by: Susant Palai <spalai>
    > Tested-by: NetBSD Build System <jenkins.org>
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Jeff Darcy <jdarcy>
    
    Change-Id: Ida30240d1ad8b8730af7ab50b129dfb05264fdf9
    BUG: 1283972
    Signed-off-by: Sakshi Bansal <sabansal>
    Reviewed-on: http://review.gluster.org/12767
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
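The idea in the commit above can be sketched as follows: on a layout-changing CHILD_MODIFIED event DHT discards its cached layout and performs a fresh lookup, whereas a new, distinct event for a replica pair going offline leaves the cache intact, since the layout xattrs remain readable from the surviving replica. This is a minimal illustrative sketch, not the real GlusterFS xlator API; the type, event, and function names below are hypothetical.

```c
/* Illustrative sketch of the fix: only a layout-changing event
 * invalidates DHT's cached layout. All names are hypothetical. */

typedef enum {
    EVENT_CHILD_MODIFIED,       /* layout may have changed */
    EVENT_SOME_DESCENDENT_DOWN  /* a replica pair went offline */
} xlator_event_t;

struct dht_layout_cache {
    int valid; /* 1 while the cached layout may be trusted */
};

/* Before the fix, a replica pair going down was reported as
 * CHILD_MODIFIED, so the cache was dropped and every lookup became a
 * fresh lookup. With a separate "descendent down" event, the cache is
 * kept and the xattrs are read from the surviving replica. */
void dht_notify(struct dht_layout_cache *cache, xlator_event_t ev)
{
    if (ev == EVENT_CHILD_MODIFIED)
        cache->valid = 0;
    /* EVENT_SOME_DESCENDENT_DOWN: leave the cached layout untouched */
}
```

With this separation, bringing one replica pair of a 2x2 volume offline no longer forces fresh lookups on every directory.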

Comment 6 Kaushal 2016-06-28 12:13:22 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user