+++ This bug was initially created as a clone of Bug #1355604 +++

Description of problem:
Fixing some of the AFR bugs reported by Coverity, which was run on downstream code (RHGS 3.1.3). The entire report of the run is attached herewith.
REVIEW: http://review.gluster.org/15018 (afr: some coverity fixes) posted (#1) for review on release-3.8 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/15018 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu)

------

commit 823eb274a3c4226aea44f6feb955a5df04aae190
Author: Ravishankar N <ravishankar>
Date:   Tue Jul 12 10:07:48 2016 +0530

afr: some coverity fixes

Note: This is a backport of http://review.gluster.org/14895. It contains:

i) fixes that prevent deadlocks (afr-common.c).
ii) fixes that prevent over-writing op-errno=ENOMEM with other possible values (afr-inode-read.c).
iii) fixes that prevent performing further operations on a NULL dictionary if its allocation fails (afr-self-heal-data.c).
iv) fixes that prevent falsely marking a sink as healed if metadata heal fails midway (afr-self-heal-metadata.c).
v) other minor fixes.

Considering that the above are not trivial fixes, the patch is a good candidate for merging into the 3.8 branch.

Thanks to Krutika for a cleaner way to track inode refs in afr_set_split_brain_choice().

Change-Id: I2d968d05b815ad764b7e3f8aa9ad95a792b3c1df
BUG: 1360556
Signed-off-by: Ravishankar N <ravishankar>
Reviewed-on: http://review.gluster.org/15018
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Krutika Dhananjay <kdhananj>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.2, please open a new bug report.

glusterfs-3.8.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-August/000058.html
[2] https://www.gluster.org/pipermail/gluster-users/