Description of problem:
In client3_3_lookup_cbk, xdata is not passed to the upper xlators when the lookup fails. But there are other xlators (like AFR) that expect xdata to be present even on failure. Fix it.

How reproducible: 100%
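A minimal standalone sketch of the fixed control flow, assuming a simplified model of the callback (the names reply_t, decode_xdata, and lookup_cbk are hypothetical stand-ins for the real client3_3_lookup_cbk / dict_unserialize machinery, not the actual GlusterFS source):

/* Before the fix, xdata was only decoded on success, so upper xlators
 * saw NULL xdata on failed lookups. The fix decodes it unconditionally. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    int   op_ret;        /* -1 on failed lookup (e.g. ENOENT)        */
    char *xdata_buf;     /* serialized xdata as sent by the brick    */
} reply_t;

/* Stand-in for dict_unserialize(): turns the wire buffer into a dict. */
static char *decode_xdata(const char *buf)
{
    return buf ? strdup(buf) : NULL;
}

/* Fixed callback: xdata is decoded *before* the failure check, so it
 * reaches the parent xlator (e.g. AFR) even when op_ret < 0.          */
static void lookup_cbk(reply_t *rsp)
{
    char *xdata = decode_xdata(rsp->xdata_buf);   /* unconditionally */

    if (rsp->op_ret < 0) {
        /* Before the fix, the code unwound with xdata == NULL here,
         * dropping the payload the brick had actually sent.          */
        printf("lookup failed, xdata = %s\n", xdata ? xdata : "(null)");
    } else {
        printf("lookup ok, xdata = %s\n", xdata ? xdata : "(null)");
    }
    free(xdata);
}

int main(void)
{
    reply_t failed = { .op_ret = -1, .xdata_buf = "heal-info" };
    lookup_cbk(&failed);   /* now prints the xdata despite the failure */
    return 0;
}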
REVIEW: http://review.gluster.org/15120 (protocol/client: Unserialize xdata even if lookup fails) posted (#2) for review on master by Anuradha Talur (atalur)
REVIEW: http://review.gluster.org/15120 (protocol/client: Unserialize xdata even if lookup fails) posted (#3) for review on master by Anuradha Talur (atalur)
REVIEW: http://review.gluster.org/15178 (afr: set data and metadata readable to child up when no heal needed) posted (#1) for review on master by Anuradha Talur (atalur)
COMMIT: http://review.gluster.org/15120 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 59186114f9545fda529368ee26c3cd3d88a80751
Author: Anuradha Talur <atalur>
Date:   Tue Aug 9 21:09:11 2016 +0530

    protocol/client: Unserialize xdata even if lookup fails

    Problem:
    AFR relies on xdata returned by lookup to determine if there are any
    files that need healing. This info is further used to optimize
    readdirp. In case of lookups with negative return value, client
    xlator was sending NULL xdata. Due to absence of xdata, AFR
    conservatively assumes that there are files that need healing,
    which is incorrect.

    Solution:
    Even in case of unsuccessful lookups, send the xdata received by
    protocol client so that higher xlators can get the info that they
    rely on.

    Change-Id: Id3a1023eb536180888eb2c0b39050000b76f7226
    BUG: 1366284
    Signed-off-by: Anuradha Talur <atalur>
    Reviewed-on: http://review.gluster.org/15120
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Poornima G <pgurusid>
    Tested-by: Poornima G <pgurusid>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Ashish Pandey <aspandey>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
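To illustrate the consumer side the commit describes: with NULL xdata an AFR-like xlator has to conservatively assume pending heals (defeating the readdirp optimization), whereas once xdata survives failed lookups it can trust the actual hint. A small hedged sketch; the names needs_heal and the heal-pending hint are illustrative, not the real AFR API:

#include <stdbool.h>
#include <stdio.h>

/* NULL hint models a reply that arrived without xdata. */
static bool needs_heal(const int *heal_pending_hint)
{
    if (heal_pending_hint == NULL)
        return true;                 /* no xdata: assume the worst     */
    return *heal_pending_hint != 0;  /* trust the brick's answer      */
}

int main(void)
{
    int no_heal = 0;
    printf("without xdata: needs_heal = %d\n", needs_heal(NULL));     /* 1 */
    printf("with xdata:    needs_heal = %d\n", needs_heal(&no_heal)); /* 0 */
    return 0;
}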
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/