Bug 1236020 - Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
Summary: Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: hari gowtham
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1245935 1248337 1260783 1260923 1284357 1285793
 
Reported: 2015-06-26 11:15 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:41 UTC
5 users

Fixed In Version: glusterfs-3.7.5-8
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1245935 1284357
Environment:
Last Closed: 2016-03-01 05:26:19 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2016:0193 (normal, SHIPPED_LIVE): Red Hat Gluster Storage 3.1 update 2, last updated 2016-03-01 10:20:36 UTC

Description Nag Pavan Chilakam 2015-06-26 11:15:09 UTC
Description of problem:
=======================
When we issue a detach-tier start or commit on a non-tier volume, it throws the right message, saying that it is not a tier volume, as below:
[root@tettnang glusterfs]# gluster v detach-tier vol2 commit
volume detach-tier commit: failed: volume vol2 is not a tier volume
[root@tettnang glusterfs]# gluster v detach-tier vol2 start
volume detach-tier start: failed: volume vol2 is not a tier volume


But if we issue a detach-tier status, then instead of saying that the volume is not a tiered volume, it says that detach-tier has not yet been started.
This is misleading.

[root@tettnang glusterfs]# gluster v info vol2
 
Volume Name: vol2
Type: Distributed-Replicate
Volume ID: d0c054d0-a72c-4f66-adb0-e67a04a6267e
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: yarrow:/rhs/brick1/vol2
Brick2: zod:/rhs/brick1/vol2
Brick3: yarrow:/rhs/brick2/vol2
Brick4: zod:/rhs/brick2/vol2
Options Reconfigured:
performance.readdir-ahead: on
[root@tettnang glusterfs]# gluster v detach-tier vol2 status
volume detach-tier status: failed: Detach-tier not started.




Version-Release number of selected component (if applicable):
==============================================================
[root@tettnang glusterfs]# gluster --version
glusterfs 3.7.1 built on Jun 23 2015 22:08:15
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@tettnang glusterfs]# rpm -qa|grep gluster
glusterfs-api-3.7.1-5.el7rhgs.x86_64
glusterfs-libs-3.7.1-5.el7rhgs.x86_64
glusterfs-rdma-3.7.1-5.el7rhgs.x86_64
glusterfs-3.7.1-5.el7rhgs.x86_64
glusterfs-cli-3.7.1-5.el7rhgs.x86_64
glusterfs-debuginfo-3.7.1-5.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-5.el7rhgs.x86_64
glusterfs-server-3.7.1-5.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-5.el7rhgs.x86_64
glusterfs-fuse-3.7.1-5.el7rhgs.x86_64
[root@tettnang glusterfs]# 



How reproducible:
==================
Easily and every time.


Steps to Reproduce:
==================
1. Create a regular non-tier volume.
2. Issue the "gluster v detach-tier <volname> status" command.


Actual results:
================
It says that detach-tier has not been started.

Expected results:
=================
It should say that the volume is not a tiered volume.
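
For illustration, below is a minimal C sketch of the check ordering that the expected behaviour implies: the volume-type check should run before the detach-state check, so a non-tier volume always gets the "not a tier volume" message no matter which detach-tier sub-command is issued. The type and names used here (volinfo_t, is_tier, detach_started, stage_detach_tier_status) are hypothetical and are not the actual glusterd code or the actual fix.

/* Hypothetical sketch of the staging-check order for "detach-tier <vol> status".
 * The names below are illustrative only, not glusterd's real structures. */
#include <stdio.h>

typedef struct {
    char name[64];
    int  is_tier;          /* 1 if the volume has a hot tier attached   */
    int  detach_started;   /* 1 if "detach-tier <vol> start" was issued */
} volinfo_t;

static int
stage_detach_tier_status (const volinfo_t *vol, char *errmsg, size_t len)
{
    /* Check the volume type first, so a plain volume gets
     * "not a tier volume" instead of "Detach-tier not started". */
    if (!vol->is_tier) {
        snprintf (errmsg, len, "volume %s is not a tier volume", vol->name);
        return -1;
    }
    if (!vol->detach_started) {
        snprintf (errmsg, len, "Detach-tier not started");
        return -1;
    }
    return 0;
}

int
main (void)
{
    volinfo_t vol2 = { "vol2", 0, 0 };   /* a regular, non-tier volume */
    char      err[128];

    if (stage_detach_tier_status (&vol2, err, sizeof (err)))
        printf ("volume detach-tier status: failed: %s\n", err);
    return 0;
}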

Comment 4 Nag Pavan Chilakam 2015-10-30 12:02:29 UTC
It fails when we issue "gluster v tier <vname> detach status", as below:

[root@zod ~]# gluster v tier ctr_set detach status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------

The fix works in the case below:


[root@zod ~]# 
[root@zod ~]# gluster v detach-tier ctr_set  status
volume detach-tier status: failed: volume ctr_set is not a tier volume.
Tier command failed





Hence, moving this bug to failed_qa.

Comment 5 hari gowtham 2015-11-09 07:22:53 UTC
I find that Bug 1236020 throws the correct warning.

[root@hgowtham-lap glusterfs]# gluster v detach-tier v2 status
volume detach-tier status: failed: volume v2 is not a tier volume.
Tier command failed

I got the output above by following your steps for reproduction.
I need a suggestion on what to do with this bug.

Comment 6 Nag Pavan Chilakam 2015-11-10 05:52:24 UTC
Hi Hari,
That is what I mentioned in my validation; re-iterating the same:
It WORKS when we issue "gluster v detach-tier <vname> status".
But it also has to work when we issue "gluster v tier <vname> detach status". This is the newer CLI, and we may be sticking to it going forward.
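
To make the point concrete, below is a minimal C sketch of why both spellings should end up with the same message: both CLI forms should be parsed to the same operation, so the same staging check (and hence the same "not a tier volume" error) runs for both. The parsing code and names here are hypothetical and are not gluster's actual CLI parser.

/* Hypothetical sketch: "detach-tier <vol> status" (old syntax) and
 * "tier <vol> detach status" (new syntax) map to one operation, so both
 * hit the same volume-type validation. Not gluster's real parser. */
#include <stdio.h>
#include <string.h>

enum op { OP_UNKNOWN, OP_DETACH_TIER_STATUS };

static enum op
parse_cli (int argc, const char **argv)
{
    if (argc == 4 && !strcmp (argv[1], "detach-tier") && !strcmp (argv[3], "status"))
        return OP_DETACH_TIER_STATUS;       /* gluster v detach-tier <vol> status */
    if (argc == 5 && !strcmp (argv[1], "tier") &&
        !strcmp (argv[3], "detach") && !strcmp (argv[4], "status"))
        return OP_DETACH_TIER_STATUS;       /* gluster v tier <vol> detach status */
    return OP_UNKNOWN;
}

int
main (void)
{
    const char *old_form[] = { "v", "detach-tier", "ctr_set", "status" };
    const char *new_form[] = { "v", "tier", "ctr_set", "detach", "status" };

    /* Both spellings resolve to the same op, so the same staging check
     * ("volume ctr_set is not a tier volume.") should apply to both.  */
    printf ("old form -> op %d\n", parse_cli (4, old_form));
    printf ("new form -> op %d\n", parse_cli (5, new_form));
    return 0;
}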

Comment 7 Sweta Anandpara 2015-12-04 14:39:01 UTC
Tested and verified this bug on the build glusterfs-3.7.5-9.el7rhgs.x86_64

Had a tiered volume and detached its hot tier with the gluster volume detach-tier command. After the volume became a regular volume, issued
gluster volume detach-tier <volname> status/start/stop/commit
as well as
gluster volume tier <volname> detach status/start/stop/commit

All the outputs were as expected: 'volume <volname> is not a tier volume'

Pasted below are the logs. Moving this bug to verified in 3.1.2.

[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.37.210
Uuid: c9541f69-4078-4683-87ff-a6add25a4b47
State: Peer in Cluster (Connected)

Hostname: 10.70.37.203
Uuid: e7a0436d-53c9-4e32-8342-8e92a8cca24e
State: Peer in Cluster (Connected)

Hostname: 10.70.37.141
Uuid: 374a4941-f16d-412f-b7ac-1ed50a534003
State: Peer in Cluster (Connected)
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v list
gluster_shared_storage
nash
ozone
testvol
tmp_vol
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# rpm -qa | grep gluster
nfs-ganesha-gluster-2.2.0-11.el7rhgs.x86_64
glusterfs-cli-3.7.5-9.el7rhgs.x86_64
glusterfs-3.7.5-9.el7rhgs.x86_64
glusterfs-api-3.7.5-9.el7rhgs.x86_64
glusterfs-ganesha-3.7.5-9.el7rhgs.x86_64
glusterfs-libs-3.7.5-9.el7rhgs.x86_64
glusterfs-fuse-3.7.5-9.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-9.el7rhgs.x86_64
glusterfs-server-3.7.5-9.el7rhgs.x86_64
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v info tmp_vol
 
Volume Name: tmp_vol
Type: Distribute
Volume ID: c933815f-8767-4d8d-9870-7135ae0797bb
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.37.55:/rhs/tmp_brick
Options Reconfigured:
ganesha.enable: off
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: disable
cluster.enable-shared-storage: enable
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v detach-tier
Usage: volume detach-tier <VOLNAME>  <start|stop|status|commit|force>
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier tmp_vol start
volume detach-tier start: failed: volume tmp_vol is not a tier volume
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier tmp_vol stop
volume tier detach stop: failed: Volume tmp_vol is not a distribute volume or contains only 1 brick.
Not performing rebalance
Tier command failed
[root@dhcp37-55 ~]# gluster v list
gluster_shared_storage
nash
ozone
testvol
tmp_vol
[root@dhcp37-55 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Tier
Volume ID: a0c3186e-09a0-4739-8f6f-5338f14c8f35
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.37.141:/rhs/thinbrick2/ozone
Brick2: 10.70.37.203:/rhs/thinbrick2/ozone
Brick3: 10.70.37.141:/rhs/thinbrick1/ozone
Brick4: 10.70.37.203:/rhs/thinbrick1/ozone
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.37.55:/rhs/thinbrick1/ozone
Brick6: 10.70.37.210:/rhs/thinbrick1/ozone
Brick7: 10.70.37.55:/rhs/thinbrick2/ozone
Brick8: 10.70.37.210:/rhs/thinbrick2/ozone
Options Reconfigured:
cluster.write-freq-threshold: 5
features.record-counters: on
cluster.tier-mode: test
features.ctr-enabled: on
nfs.disable: off
performance.readdir-ahead: on
ganesha.enable: off
nfs-ganesha: disable
cluster.enable-shared-storage: enable
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v detach-tier
Usage: volume detach-tier <VOLNAME>  <start|stop|status|commit|force>
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone stop
volume tier detach stop: failed: Detach-tier not started
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone status
volume tier detach status: failed: Detach-tier not started
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: failed: Brick's in Hot tier is not decommissioned yet. Use gluster volume detach-tier <VOLNAME> <start | commit | force> command instead
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone start
volume detach-tier start: success
ID: d7ad8a8d-4287-4d36-bf0d-d4606424671e
[root@dhcp37-55 ~]# gluster v detach-tier ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                            10.70.37.203                0        0Bytes             0             0             0            completed               0.00
                            10.70.37.141                0        0Bytes             0             0             0            completed               0.00
[root@dhcp37-55 ~]# gluster v detach-tier ozone commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: a0c3186e-09a0-4739-8f6f-5338f14c8f35
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.55:/rhs/thinbrick1/ozone
Brick2: 10.70.37.210:/rhs/thinbrick1/ozone
Brick3: 10.70.37.55:/rhs/thinbrick2/ozone
Brick4: 10.70.37.210:/rhs/thinbrick2/ozone
Options Reconfigured:
cluster.write-freq-threshold: 5
features.record-counters: on
nfs.disable: off
performance.readdir-ahead: on
ganesha.enable: off
nfs-ganesha: disable
cluster.enable-shared-storage: enable
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v status ozone
Status of volume: ozone
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.55:/rhs/thinbrick1/ozone     49372     0          Y       1427 
Brick 10.70.37.210:/rhs/thinbrick1/ozone    49370     0          Y       32041
Brick 10.70.37.55:/rhs/thinbrick2/ozone     49373     0          Y       1436 
Brick 10.70.37.210:/rhs/thinbrick2/ozone    49371     0          Y       32052
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       12738
NFS Server on 10.70.37.210                  2049      0          Y       24444
Self-heal Daemon on 10.70.37.210            N/A       N/A        Y       24452
NFS Server on 10.70.37.141                  2049      0          Y       26199
Self-heal Daemon on 10.70.37.141            N/A       N/A        Y       26207
NFS Server on 10.70.37.203                  2049      0          Y       32431
Self-heal Daemon on 10.70.37.203            N/A       N/A        Y       32439
 
Task Status of Volume ozone
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp37-55 ~]# gluster v detach-tier ozone status
volume tier detach status: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone start
volume detach-tier start: failed: volume ozone is not a tier volume
Tier command failed
[root@dhcp37-55 ~]# gluster v detach-tier ozone stop
volume tier detach stop: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v tier
Usage: volume tier <VOLNAME> status
volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>...
volume tier <VOLNAME> detach <start|stop|status|commit|[force]>

Tier command failed
[root@dhcp37-55 ~]# 
[root@dhcp37-55 ~]# gluster v tier ozone detach stop
volume tier detach stop: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]# gluster v tier ozone detach start
volume detach-tier start: failed: volume ozone is not a tier volume
Tier command failed
[root@dhcp37-55 ~]# gluster v tier ozone detach status
volume tier detach status: failed: volume ozone is not a tier volume.
Tier command failed
[root@dhcp37-55 ~]# gluster v tier ozone detach commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: failed: volume ozone is not a tier volume
Tier command failed
[root@dhcp37-55 ~]#

Comment 9 errata-xmlrpc 2016-03-01 05:26:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

