+++ This bug was initially created as a clone of Bug #1290125 +++

Include the fix from http://review.gluster.org/#/c/12936/ in the release-3.7 branch.
REVIEW: http://review.gluster.org/12947 (Fix arbiter-statfs.t) posted (#1) for review on release-3.7 by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/12947 (Fix arbiter-statfs.t) posted (#2) for review on release-3.7 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/12947 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)
------
commit 60912c650839a512e5b2f4a251100969f830996d
Author: Ravishankar N <ravishankar>
Date: Fri Dec 11 10:32:52 2015 +0530

    Fix arbiter-statfs.t

    ..and remove it from the bad tests list.
    Backport of http://review.gluster.org/#/c/12936/

    Problem:
    https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12516/consoleFull
    ++ SETUP_LOOP /d/backends/brick1
    ++ '[' 1 '!=' 1 ']'
    ++ backend=/d/backends/brick1
    ++ case ${OSTYPE} in
    +++ awk -F: '/not in use/{print $1; exit}'
    +++ vnconfig -l
    vnconfig: VNDIOCGET: Bad file descriptor
    ++ vnd=
    ++ '[' x = x ']'
    ++ echo 'no more vnd'
    no more vnd
    ++ return 1

    Fix: TEST the return value of SETUP_LOOP. Also added EXIT_EARLY to the
    test case, because there is no point in continuing the test when setting
    up the bricks fails.

    Change-Id: Idca269650385765a13be070186dc0b7eb2e5fda1
    BUG: 1290658
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/12947
    Reviewed-by: Michael Adam <obnox>
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
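The fix amounts to wrapping the setup call in the framework's TEST helper, so a failed loop-device setup is reported as a test failure instead of being silently ignored, and enabling EXIT_EARLY so the run aborts rather than continuing with no bricks. A minimal sketch of that pattern, with hypothetical stand-ins for TEST and SETUP_LOOP (the real ones live in Gluster's tests/include.rc), assuming SETUP_LOOP fails the way it did on the NetBSD slave:

```shell
#!/bin/bash
# Hypothetical stand-ins for Gluster's test framework; these are NOT the
# real implementations from tests/include.rc, only an illustration of the
# "TEST the return value" pattern used by the fix.

TEST() {
    if "$@"; then
        result="OK: $*"
    else
        result="FAIL: $*"
        # In the real framework with EXIT_EARLY set, the run would abort
        # here, since later steps are pointless without working bricks.
        failed=1
    fi
    echo "$result"
}

SETUP_LOOP() {
    # Stand-in that fails the way the NetBSD slave did ("no more vnd").
    echo "no more vnd"
    return 1
}

failed=0
# Before the fix the return value was ignored; after it, it is TESTed:
TEST SETUP_LOOP /d/backends/brick1
```

Running this prints the "no more vnd" diagnostic followed by a FAIL line for the setup step, which is exactly the visibility the original test was missing.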
v3.7.7 contains the fix.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user