+++ This bug was initially created as a clone of Bug #1281230 +++

Description of problem:
Currently, even if a single replica pair goes down, fresh lookups are issued for all files and directories even though there are no layout changes. DHT must avoid these fresh lookups when bricks go down.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume, mount the volume, and create a few directories.
2. Bring one of the replica pairs down.
3. Perform a lookup on the directories.

Actual results:
Fresh lookups on all the directories.

Expected results:
Fresh lookups must be avoided; the xattrs should be read from the other pair.
REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#1) for review on release-3.7 by Sakshi Bansal
REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#2) for review on release-3.7 by Sakshi Bansal
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions
REVIEW: http://review.gluster.org/12767 (afr: replica pair going offline does not require CHILD_MODIFIED event) posted (#3) for review on release-3.7 by Sakshi Bansal
COMMIT: http://review.gluster.org/12767 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)
------
commit fa78b755e9c58328c1df4ef1bfeb752d47534a4a
Author: Sakshi Bansal <sabansal>
Date: Thu Nov 12 12:28:53 2015 +0530

    afr: replica pair going offline does not require CHILD_MODIFIED event

    As a part of the CHILD_MODIFIED event, DHT forgets the current layout
    and performs a fresh lookup. However, this is not required when a
    replica pair goes offline, as the xattrs can be read from other
    replica pairs. Hence setting a different event to handle a replica
    pair going down.

    > Backport of http://review.gluster.org/#/c/12573/
    > Change-Id: I5ede2a6398e63f34f89f9d3c9bc30598974402e3
    > BUG: 1281230
    > Signed-off-by: Sakshi Bansal <sabansal>
    > Reviewed-on: http://review.gluster.org/12573
    > Reviewed-by: Ravishankar N <ravishankar>
    > Reviewed-by: Susant Palai <spalai>
    > Tested-by: NetBSD Build System <jenkins.org>
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Jeff Darcy <jdarcy>

    Change-Id: Ida30240d1ad8b8730af7ab50b129dfb05264fdf9
    BUG: 1283972
    Signed-off-by: Sakshi Bansal <sabansal>
    Reviewed-on: http://review.gluster.org/12767
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user