Description of problem:
Volume status fails with a commit error after running detach tier start on a tiered volume.

Version-Release number of selected component (if applicable): mainline

How reproducible: 100%

Steps to Reproduce:
1. Create a replica 2 volume with two bricks.
2. Attach a hot tier with only one brick.
3. Run detach tier start.
4. Run gluster volume status vol.

Actual results:
volume status fails for vol.

Expected results:
volume status should succeed.

Additional info:
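A minimal reproduction sketch of the steps above, assuming the attach-tier/detach-tier CLI syntax of the mainline builds this was reported against; the volume name, hostnames and brick paths are placeholders, not taken from the original report:

  # create and start a replica 2 volume with two (cold) bricks
  gluster volume create vol replica 2 host1:/bricks/cold1 host2:/bricks/cold2
  gluster volume start vol
  # attach a hot tier consisting of only one brick
  gluster volume attach-tier vol host1:/bricks/hot1
  # start detaching the hot tier, then query the volume status
  gluster volume detach-tier vol start
  gluster volume status vol    # failed with a commit error before the fix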
REVIEW: http://review.gluster.org/12146 (tier/glusterd: volume status failed after detach start) posted (#1) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/12146 (tier/glusterd: volume status failed after detach start) posted (#2) for review on master by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/12146 (tier/glusterd: volume status failed after detach start) posted (#3) for review on master by Dan Lambright (dlambrig)
COMMIT: http://review.gluster.org/12146 committed in master by Dan Lambright (dlambrig)
------
commit 51632e1eec3ff88d19867dc8d266068dd7db432a
Author: Mohammed Rafi KC <rkavunga>
Date: Thu Sep 10 11:52:27 2015 +0530

    tier/glusterd: volume status failed after detach start

    After triggering detach start on a tiered volume, volume status fails.
    This is because the brick count was set incorrectly in the rebal
    dictionary.

    Change-Id: I6a472bf2653a07522416699420161f2fb1746aef
    BUG: 1261757
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/12146
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
    Tested-by: Dan Lambright <dlambrig>
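A hedged verification sketch for a build containing the fix (glusterfs-3.8.0 per the closing comment below); the volume layout reuses the placeholder setup from the reproduction sketch, and command output is not reproduced here:

  # confirm the installed build contains commit 51632e1
  glusterfs --version
  # re-run the failing sequence; volume status should now succeed
  gluster volume detach-tier vol start
  gluster volume status vol
  # optionally monitor detach progress, if the build supports the status sub-command
  gluster volume detach-tier vol status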
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user