Description of problem:
Tier status shows as "Progressing", but no rebalance daemon is running.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create and start a tiered volume (which starts the tier daemon).
2. Kill all the bricks in a replica set.
3. Check the tier status and the running state of the tier daemon.

Actual results:
The tier daemon died, but the status still shows it as running.

Expected results:
The status should be "failed".

Additional info:
REVIEW: http://review.gluster.org/11068 (tiering/rebalance: tier daemon stopped with out updating status) posted (#1) for review on master by mohammed rafi kc (rkavunga)
COMMIT: http://review.gluster.org/11068 committed in master by Dan Lambright (dlambrig)

------

commit d3714f252d91f4d1d5df05c4dcc8bc7c2ee75326
Author: Mohammed Rafi KC <rkavunga>
Date:   Wed Jun 3 17:10:22 2015 +0530

    tiering/rebalance: tier daemon stopped with out updating status

    When a subvol went down, the tier daemon stopped immediately, and the
    status remained "Progressing". With this change, in the tier xlator,
    when a subvol goes offline the status is updated to failed.

    Change-Id: I9f722ed0d35cda8c7fc1a7e75af52222e2d0fdb7
    BUG: 1227803
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/11068
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Dan Lambright <dlambrig>
    Tested-by: Dan Lambright <dlambrig>
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in a GlusterFS release, has been closed. Hence this mainline BZ is being closed as well.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user