Description of problem:
=======================
When we issue a detach-tier start or commit on a non-tier volume, it correctly reports that the volume is not a tier volume, as below:

[root@tettnang glusterfs]# gluster v detach-tier vol2 commit
volume detach-tier commit: failed: volume vol2 is not a tier volume
[root@tettnang glusterfs]# gluster v detach-tier vol2 start
volume detach-tier start: failed: volume vol2 is not a tier volume

But if we issue a detach-tier status, then instead of saying the volume is not a tiered volume, it says that detach-tier has not yet been started. This is misleading.

[root@tettnang glusterfs]# gluster v info vol2

Volume Name: vol2
Type: Distributed-Replicate
Volume ID: d0c054d0-a72c-4f66-adb0-e67a04a6267e
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: yarrow:/rhs/brick1/vol2
Brick2: zod:/rhs/brick1/vol2
Brick3: yarrow:/rhs/brick2/vol2
Brick4: zod:/rhs/brick2/vol2
Options Reconfigured:
performance.readdir-ahead: on

[root@tettnang glusterfs]# gluster v detach-tier vol2 status
volume detach-tier status: failed: Detach-tier not started.

Version-Release number of selected component (if applicable):
==============================================================
[root@tettnang glusterfs]# gluster --version
glusterfs 3.7.1 built on Jun 23 2015 22:08:15
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@tettnang glusterfs]# rpm -qa | grep gluster
glusterfs-api-3.7.1-5.el7rhgs.x86_64
glusterfs-libs-3.7.1-5.el7rhgs.x86_64
glusterfs-rdma-3.7.1-5.el7rhgs.x86_64
glusterfs-3.7.1-5.el7rhgs.x86_64
glusterfs-cli-3.7.1-5.el7rhgs.x86_64
glusterfs-debuginfo-3.7.1-5.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-5.el7rhgs.x86_64
glusterfs-server-3.7.1-5.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-5.el7rhgs.x86_64
glusterfs-fuse-3.7.1-5.el7rhgs.x86_64
[root@tettnang glusterfs]#

How reproducible:
==================
Easily, and every time.

Steps to Reproduce:
==================
1. Create a regular non-tier volume.
2. Issue "gluster v detach-tier <volname> status".

Actual results:
================
It says detach-tier has not been started.

Expected results:
=================
It should say the volume is not a tiered volume.
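The reproduction steps above can be sketched as a short shell session; this is a minimal illustration, not a literal log from the reporter's cluster, and the volume name and brick paths are placeholders taken from the description:

```shell
# Step 1: create and start a plain (non-tiered) volume.
# Hostnames and brick paths below are illustrative.
gluster volume create vol2 replica 2 \
    yarrow:/rhs/brick1/vol2 zod:/rhs/brick1/vol2 \
    yarrow:/rhs/brick2/vol2 zod:/rhs/brick2/vol2
gluster volume start vol2

# Step 2: query detach-tier status on the non-tiered volume.
# Buggy behaviour:   "failed: Detach-tier not started."
# Expected behaviour: "failed: volume vol2 is not a tier volume"
gluster volume detach-tier vol2 status
```

Note that start and commit already produce the expected "not a tier volume" error on the same volume; only the status path takes the wrong branch.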
It fails when we issue "gluster v tier <vname> detach status", as below:

[root@zod ~]# gluster v tier ctr_set detach status
Node  Rebalanced-files  size  scanned  failures  skipped  status  run time in secs
---------  -----------  -----------  -----------  -----------  -----------  ------------  --------------

The fix works in the below case:

[root@zod ~]#
[root@zod ~]# gluster v detach-tier ctr_set status
volume detach-tier status: failed: volume ctr_set is not a tier volume.
Tier command failed

Hence moving it to failed_qa.
I find that Bug 1236020 throws the correct warning:

[root@hgowtham-lap glusterfs]# gluster v detach-tier v2 status
volume detach-tier status: failed: volume v2 is not a tier volume.
Tier command failed

I got this output by following your steps for reproduction, as mentioned above. Need a suggestion on what to do with this bug.
Hi Hari, that's what I mentioned in my validation; re-iterating the same:

It WORKS when we issue "gluster v detach-tier <vname> status".
But it also has to work when we issue "gluster v tier <vname> detach status". This is the newer CLI, and we may be sticking to it going forward.
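For clarity, these are the two CLI spellings in question; both must return the same "not a tier volume" error on a non-tiered volume (the volume name is a placeholder):

```shell
# Older syntax -- already fixed, reports the correct error:
gluster volume detach-tier <vname> status

# Newer syntax -- must behave identically, but did not at validation time:
gluster volume tier <vname> detach status
```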
Tested and verified this bug on the build glusterfs-3.7.5-9.el7rhgs.x86_64.

Had a tiered volume and detached it with the gluster volume detach-tier command. After the volume became a regular volume, issued gluster volume detach-tier status/start/stop/commit as well as gluster volume tier <volname> detach status/start/stop/commit. All the outputs were as expected: 'volume <volname> is not a tier volume'.

Pasted below are the logs. Moving this bug to verified in 3.1.2.

[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.37.210
Uuid: c9541f69-4078-4683-87ff-a6add25a4b47
State: Peer in Cluster (Connected)

Hostname: 10.70.37.203
Uuid: e7a0436d-53c9-4e32-8342-8e92a8cca24e
State: Peer in Cluster (Connected)

Hostname: 10.70.37.141
Uuid: 374a4941-f16d-412f-b7ac-1ed50a534003
State: Peer in Cluster (Connected)

[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v list
gluster_shared_storage
nash
ozone
testvol
tmp_vol
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# rpm -qa | grep gluster
nfs-ganesha-gluster-2.2.0-11.el7rhgs.x86_64
glusterfs-cli-3.7.5-9.el7rhgs.x86_64
glusterfs-3.7.5-9.el7rhgs.x86_64
glusterfs-api-3.7.5-9.el7rhgs.x86_64
glusterfs-ganesha-3.7.5-9.el7rhgs.x86_64
glusterfs-libs-3.7.5-9.el7rhgs.x86_64
glusterfs-fuse-3.7.5-9.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-9.el7rhgs.x86_64
glusterfs-server-3.7.5-9.el7rhgs.x86_64
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v info tmp_vol

Volume Name: tmp_vol
Type: Distribute
Volume ID: c933815f-8767-4d8d-9870-7135ae0797bb
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.37.55:/rhs/tmp_brick
Options Reconfigured:
ganesha.enable: off
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: disable
cluster.enable-shared-storage: enable
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v detach-tier
Usage: volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Tier
command failed
[root@dhcp37-55 ~]# gluster v detach-tier tmp_vol start
volume detach-tier start: failed: volume tmp_vol is not a tier volume
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier tmp_vol stop
volume tier detach stop: failed: Volume tmp_vol is not a distribute volume or contains only 1 brick. Not performing rebalance
Tier command failed
[root@dhcp37-55 ~]# gluster v list
gluster_shared_storage
nash
ozone
testvol
tmp_vol
[root@dhcp37-55 ~]# gluster v info ozone

Volume Name: ozone
Type: Tier
Volume ID: a0c3186e-09a0-4739-8f6f-5338f14c8f35
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.37.141:/rhs/thinbrick2/ozone
Brick2: 10.70.37.203:/rhs/thinbrick2/ozone
Brick3: 10.70.37.141:/rhs/thinbrick1/ozone
Brick4: 10.70.37.203:/rhs/thinbrick1/ozone
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.37.55:/rhs/thinbrick1/ozone
Brick6: 10.70.37.210:/rhs/thinbrick1/ozone
Brick7: 10.70.37.55:/rhs/thinbrick2/ozone
Brick8: 10.70.37.210:/rhs/thinbrick2/ozone
Options Reconfigured:
cluster.write-freq-threshold: 5
features.record-counters: on
cluster.tier-mode: test
features.ctr-enabled: on
nfs.disable: off
performance.readdir-ahead: on
ganesha.enable: off
nfs-ganesha: disable
cluster.enable-shared-storage: enable
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v detach-tier
Usage: volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone stop
volume tier detach stop: failed: Detach-tier not started
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone status
volume tier detach status: failed: Detach-tier not started
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone commit
Removing tier can result in data loss. Do you want to Continue?
(y/n) y
volume detach-tier commit: failed: Brick's in Hot tier is not decommissioned yet. Use gluster volume detach-tier <VOLNAME> <start | commit | force> command instead
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone start
volume detach-tier start: success
ID: d7ad8a8d-4287-4d36-bf0d-d4606424671e
[root@dhcp37-55 ~]# gluster v detach-tier ozone status
Node          Rebalanced-files  size    scanned  failures  skipped  status     run time in secs
---------     -----------       ------  -------  --------  -------  ---------  ----------------
10.70.37.203  0                 0Bytes  0        0         0        completed  0.00
10.70.37.141  0                 0Bytes  0        0         0        completed  0.00
[root@dhcp37-55 ~]# gluster v detach-tier ozone commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated. If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v info ozone

Volume Name: ozone
Type: Distributed-Replicate
Volume ID: a0c3186e-09a0-4739-8f6f-5338f14c8f35
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.55:/rhs/thinbrick1/ozone
Brick2: 10.70.37.210:/rhs/thinbrick1/ozone
Brick3: 10.70.37.55:/rhs/thinbrick2/ozone
Brick4: 10.70.37.210:/rhs/thinbrick2/ozone
Options Reconfigured:
cluster.write-freq-threshold: 5
features.record-counters: on
nfs.disable: off
performance.readdir-ahead: on
ganesha.enable: off
nfs-ganesha: disable
cluster.enable-shared-storage: enable
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v status ozone
Status of volume: ozone
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.55:/rhs/thinbrick1/ozone     49372     0          Y       1427
Brick 10.70.37.210:/rhs/thinbrick1/ozone    49370     0          Y       32041
Brick 10.70.37.55:/rhs/thinbrick2/ozone     49373     0
Y       1436
Brick 10.70.37.210:/rhs/thinbrick2/ozone    49371     0          Y       32052
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       12738
NFS Server on 10.70.37.210                  2049      0          Y       24444
Self-heal Daemon on 10.70.37.210            N/A       N/A        Y       24452
NFS Server on 10.70.37.141                  2049      0          Y       26199
Self-heal Daemon on 10.70.37.141            N/A       N/A        Y       26207
NFS Server on 10.70.37.203                  2049      0          Y       32431
Self-heal Daemon on 10.70.37.203            N/A       N/A        Y       32439

Task Status of Volume ozone
------------------------------------------------------------------------------
There are no active volume tasks

[root@dhcp37-55 ~]# gluster v detach-tier ozone status
volume tier detach status: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone start
volume detach-tier start: failed: volume ozone is not a tier volume
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone stop
volume tier detach stop: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v tier
Usage: volume tier <VOLNAME> status
volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>...
volume tier <VOLNAME> detach <start|stop|status|commit|[force]>
Tier command failed
[root@dhcp37-55 ~]#
[root@dhcp37-55 ~]# gluster v tier ozone detach stop
volume tier detach stop: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]# gluster v tier ozone detach start
volume detach-tier start: failed: volume ozone is not a tier volume
Tier command failed
[root@dhcp37-55 ~]# gluster v tier ozone detach status
volume tier detach status: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]# gluster v tier ozone detach commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: failed: volume ozone is not a tier volume
Tier command failed
[root@dhcp37-55 ~]#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html