Description of problem:
=======================
When you try to detach a tier in a failure case, say when the volume is offline, the error message talks about remove-brick, as below:

volume detach-tier start: failed: Volume testme needs to be started before remove-brick (you can use 'force' or 'commit' to override this behavior)

Instead it should refer to "detach-tier".

Version-Release number of selected component (if applicable):
=============================================================
[root@nag-manual-node1 ~]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@nag-manual-node1 ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch

Steps to Reproduce:
===================
1. Take a tiered volume offline.
2. Issue a detach-tier start; it throws the wrong error, referring to remove-brick instead of detach-tier.
REVIEW: http://review.gluster.org/12190 (Tiering:Changing error message as detach-tier instead of "remove-brick") posted (#1) for review on release-3.7 by hari gowtham (hari.gowtham005)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report. glusterfs-3.7.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user