+++ This bug was initially created as a clone of Bug #1216976 +++

Description of problem:
=======================
Currently, detach-tier executes successfully straight away on a stopped volume. We should not allow this: the user may have stopped the volume for maintenance. If we want to allow it at all, the user should have to use the force option, just as with remove-brick. Allowing it straight away can cause data loss if the detach-tier command is issued accidentally.

Version-Release number of selected component (if applicable):
=============================================================
[root@yarrow glusterfs]# gluster --version
glusterfs 3.7.0alpha0 built on Apr 28 2015 01:37:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@yarrow glusterfs]# rpm -qa | grep gluster
glusterfs-fuse-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-libs-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-cli-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-server-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-api-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64

Steps to Reproduce:
===================
1. Create a volume, start it, and attach a tier to it.
2. Now stop the volume.
3. Issue a detach-tier. This passes straight away.

Expected results:
=================
Do not allow detach-tier on a stopped volume. If we have to allow it, require the force option, just as with remove-brick. Allowing it straight away can cause data loss if the detach-tier command is issued accidentally.

--- Additional comment from Mohammed Rafi KC on 2015-05-02 09:59:07 EDT ---

Fixed as part of the change http://review.gluster.org/#/c/10284/.
--- Additional comment from Niels de Vos on 2015-05-15 09:07:42 EDT ---

This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.
[root@rhsqa14-vm1 ~]# gluster v info

Volume Name: mars
Type: Tier
Volume ID: ec4525aa-a190-4598-afce-f48c191aa125
Status: Stopped
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: 10.70.47.163:/rhs/brick3/m0
Brick2: 10.70.47.165:/rhs/brick3/m0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.47.165:/rhs/brick1/m0
Brick4: 10.70.47.163:/rhs/brick1/m0
Brick5: 10.70.47.165:/rhs/brick2/m0
Brick6: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured:
features.uss: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.min-free-disk: 10
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]#

[root@rhsqa14-vm1 ~]# gluster v detach-tier mars start
volume detach-tier start: failed: Volume mars needs to be started before remove-brick (you can use 'force' or 'commit' to override this behavior)
[root@rhsqa14-vm1 ~]#
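The guarded behavior verified above can be sketched as a small shell wrapper. This is an illustrative sketch only: `safe_detach_tier` is a hypothetical helper, not a real gluster command, and the error message it prints is modeled on (but not identical to) the CLI output shown above.

```shell
# Hypothetical wrapper illustrating the guard the fix enforces:
# refuse detach-tier on a volume that is not Started, unless the
# caller explicitly passes "force".
safe_detach_tier() {
    vol=$1
    mode=${2:-start}
    # Parse the "Status:" field from "gluster v info <vol>"
    status=$(gluster v info "$vol" | awk '/^Status:/ {print $2}')
    if [ "$status" != "Started" ] && [ "$mode" != "force" ]; then
        echo "volume detach-tier $mode: failed: Volume $vol needs to be started (use 'force' to override)" >&2
        return 1
    fi
    gluster v detach-tier "$vol" "$mode"
}
```

Against the stopped volume mars above, `safe_detach_tier mars start` would fail with exit status 1, while `safe_detach_tier mars force` would proceed, mirroring the force-override semantics of remove-brick.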
[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7.1 built on Jun 9 2015 02:31:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-1.el6rhs.x86_64
glusterfs-cli-3.7.1-1.el6rhs.x86_64
glusterfs-libs-3.7.1-1.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-1.el6rhs.x86_64
glusterfs-fuse-3.7.1-1.el6rhs.x86_64
glusterfs-server-3.7.1-1.el6rhs.x86_64
glusterfs-api-3.7.1-1.el6rhs.x86_64
[root@rhsqa14-vm1 ~]#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html