Bug 1224164 - data tiering: detach tier status not working
Summary: data tiering: detach tier status not working
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Mohammed Rafi KC
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks: 1202842
 
Reported: 2015-05-22 09:40 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:37 UTC
CC: 6 users

Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 04:48:24 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHSA-2015:1495
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 08:26:26 UTC

Description Nag Pavan Chilakam 2015-05-22 09:40:57 UTC
Description of problem:
======================
When I issue detach-tier status, it does not show any useful information. I issued a detach-tier start and wanted to see its status, but nothing is shown:

gluster> volume detach-tier vol1 status
volume detach-tier unknown: success


[root@zod ~]# gluster v detach-tier  vol2 start
volume detach-tier start: failed: Commit failed on localhost. Please check the log file for more details.
[root@zod ~]# gluster v info vol2
 
Volume Name: vol2
Type: Tier
Volume ID: 858ae0b9-0cc9-41a9-b89b-d42e6791e2d7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.35.144:/ssdbricks_75G_1/vol2
Brick2: yarrow:/ssdbricks_75G_1/vol2
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.35.144:/brick_200G_1/vol2
Brick4: yarrow:/brick_200G_1/vol2
Brick5: 10.70.35.144:/brick_200G_2/vol2
Brick6: yarrow:/brick_200G_2/vol2
Options Reconfigured:
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier  vol2 start force
volume detach-tier start: failed: An earlier remove-brick task exists for volume vol2. Either commit it or stop it before starting a new task.
[root@zod ~]# gluster v info
 
Volume Name: vol1
Type: Tier
Volume ID: 23dba7de-a94a-49c2-80f1-6d97b0ab1309
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: yarrow:/ssdbricks_75G_1/vol1
Brick2: 10.70.35.144:/ssdbricks_75G_1/vol1
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.35.144:/brick_200G_1/vol1
Brick4: yarrow:/brick_200G_1/vol1
Brick5: 10.70.35.144:/brick_200G_2/vol1
Brick6: yarrow:/brick_200G_2/vol1
Options Reconfigured:
cluster.tier-demote-frequency: 10
features.record-counters: on
features.ctr-enabled: on
performance.readdir-ahead: on
 
Volume Name: vol2
Type: Tier
Volume ID: 858ae0b9-0cc9-41a9-b89b-d42e6791e2d7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.35.144:/ssdbricks_75G_1/vol2
Brick2: yarrow:/ssdbricks_75G_1/vol2
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.35.144:/brick_200G_1/vol2
Brick4: yarrow:/brick_200G_1/vol2
Brick5: 10.70.35.144:/brick_200G_2/vol2
Brick6: yarrow:/brick_200G_2/vol2
Options Reconfigured:
performance.readdir-ahead: on
[root@zod ~]# gluster v info vol1 
 
Volume Name: vol1
Type: Tier
Volume ID: 23dba7de-a94a-49c2-80f1-6d97b0ab1309
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: yarrow:/ssdbricks_75G_1/vol1
Brick2: 10.70.35.144:/ssdbricks_75G_1/vol1
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.35.144:/brick_200G_1/vol1
Brick4: yarrow:/brick_200G_1/vol1
Brick5: 10.70.35.144:/brick_200G_2/vol1
Brick6: yarrow:/brick_200G_2/vol1
Options Reconfigured:
cluster.tier-demote-frequency: 10
features.record-counters: on
features.ctr-enabled: on
performance.readdir-ahead: on
[root@zod ~]# gluster v detach-tier vol1 start
volume detach-tier start: failed: Commit failed on localhost. Please check the log file for more details.
[root@zod ~]# cd /var/log/glusterfs/
[root@zod glusterfs]# ls
bricks   cmd_history.log           etc-glusterfs-glusterd.vol.log  geo-replication-slaves  glustershd.log-20150521  nfs.log-20150521  vol1-rebalance.log
cli.log  cmd_history.log-20150521  geo-replication                 glustershd.log          nfs.log                  snaps             vol2-rebalance.log
[root@zod glusterfs]# less etc-glusterfs-glusterd.vol.log 
[root@zod glusterfs]# gluster v--vers
unrecognized word: v--vers (position 0)
[root@zod glusterfs]# gluster --version
glusterfs 3.7.0 built on May 15 2015 01:33:40
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@zod glusterfs]# rpm -qa|grep gluster
glusterfs-debuginfo-3.7.0-2.el7rhs.x86_64
glusterfs-geo-replication-3.7.0-2.el7rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el7rhs.x86_64
glusterfs-cli-3.7.0-2.el7rhs.x86_64
glusterfs-libs-3.7.0-2.el7rhs.x86_64
glusterfs-api-3.7.0-2.el7rhs.x86_64
glusterfs-server-3.7.0-2.el7rhs.x86_64
glusterfs-resource-agents-3.7.0-2.el7rhs.noarch
glusterfs-rdma-3.7.0-2.el7rhs.x86_64
glusterfs-devel-3.7.0-2.el7rhs.x86_64
glusterfs-api-devel-3.7.0-2.el7rhs.x86_64
glusterfs-3.7.0-2.el7rhs.x86_64
glusterfs-fuse-3.7.0-2.el7rhs.x86_64


[root@zod ~]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick yarrow:/ssdbricks_75G_1/vol1          49154     0          Y       22011
Brick 10.70.35.144:/ssdbricks_75G_1/vol1    49154     0          Y       2416 
Brick 10.70.35.144:/brick_200G_1/vol1       49152     0          Y       2187 
Brick yarrow:/brick_200G_1/vol1             49152     0          Y       21883
Brick 10.70.35.144:/brick_200G_2/vol1       49153     0          Y       2206 
Brick yarrow:/brick_200G_2/vol1             49153     0          Y       21902
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on yarrow                        N/A       N/A        N       N/A  
 
Task Status of Volume vol1
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : b41122e6-3707-4f23-bff1-b383ec2d1c6b
Removed bricks:     
10.70.35.144:/ssdbricks_75G_1/vol1
yarrow:/ssdbricks_75G_1/vol1
Status               : completed           
 
[root@zod ~]# gluster v detach-tier vol1 status
volume detach-tier unknown: success



Steps to Reproduce:
1. Create and start a tiered volume.
2. Start a detach-tier operation.
3. Check the detach-tier status (a CLI sketch of these steps follows below).

Expected results:
detach-tier status must work, with the status shown to the user.
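
A minimal CLI sketch of the reproduction steps above, assuming a two-node cluster; the volume name, hostnames, and brick paths are placeholders and are not taken from this report:

# create and start a distributed-replicate volume (this becomes the cold tier)
gluster volume create testvol replica 2 node1:/bricks/cold1 node2:/bricks/cold1 node1:/bricks/cold2 node2:/bricks/cold2
gluster volume start testvol
# attach a hot tier, converting the volume to type Tier
gluster volume attach-tier testvol node1:/bricks/hot1 node2:/bricks/hot1
# start detaching the hot tier, then query the detach status (step 3)
gluster volume detach-tier testvol start
gluster volume detach-tier testvol status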

Comment 4 Triveni Rao 2015-06-11 16:42:47 UTC
[root@rhsqa14-vm1 ~]# gluster v detach-tier mix start
volume detach-tier start: success
ID: 5f08c911-0007-4fd5-b88f-f8ba6b3aefa2
[root@rhsqa14-vm1 ~]# 
[root@rhsqa14-vm1 ~]# gluster v detach-tier mix status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     -----------
                               localhost                0        0Bytes             0             0             0            completed               0
                            10.70.47.163                0        0Bytes             0             0             0            completed               0
[root@rhsqa14-vm1 ~]# gluster v detach-tier mix commit
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 
[root@rhsqa14-vm1 ~]# 

This bug is verified.

Comment 5 Triveni Rao 2015-06-12 11:14:03 UTC
[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7.1 built on Jun  9 2015 02:31:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-1.el6rhs.x86_64
glusterfs-cli-3.7.1-1.el6rhs.x86_64
glusterfs-libs-3.7.1-1.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-1.el6rhs.x86_64
glusterfs-fuse-3.7.1-1.el6rhs.x86_64
glusterfs-server-3.7.1-1.el6rhs.x86_64
glusterfs-api-3.7.1-1.el6rhs.x86_64
[root@rhsqa14-vm1 ~]#

Comment 6 errata-xmlrpc 2015-07-29 04:48:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

