Bug 1229248

Summary: Data Tiering:UI:changes required to CLI responses for attach and detach tier
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nag Pavan Chilakam <nchilaka>
Component: tier
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: medium
Docs Contact:
Priority: urgent
Version: rhgs-3.1
CC: amukherj, annair, asrivast, bugs, josferna, rhs-bugs, rkavunga, storage-qa-internal, trao
Target Milestone: ---
Keywords: Triaged
Target Release: RHGS 3.1.0
Hardware: Unspecified
OS: Linux
Whiteboard: TIERING
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1211562
Environment:
Last Closed: 2015-07-29 04:58:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1211562
Bug Blocks: 1186580, 1202842, 1220052

Description Nag Pavan Chilakam 2015-06-08 10:28:50 UTC
+++ This bug was initially created as a clone of Bug #1211562 +++

Description of problem:
======================
The response the user gets after executing an attach-tier or detach-tier is currently quite ambiguous: it talks about the brick rather than the tier.
E.g., on an attach-tier it reports that the add-brick was successful:
[root@yarrow ~]# gluster v attach-tier ec_vol1 yarrow:/yarrow_ssd_75G_2/ec_vol1 rhs-client6:/brick15/ec_vol1 force
volume add-brick: success
[root@yarrow ~]# gluster v info ec_vol1


It should instead say that attaching the tier to <volname> was successful.


The same applies to detach-tier: it reports that the remove-brick was successful.

This also needs to be handled when the command fails: the message should say that the detach-tier failed rather than that the remove-brick failed.
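For clarity, here is a minimal, self-contained C sketch of the intended behaviour: the CLI should derive the operation name in its status line from the command that was actually issued (attach-tier/detach-tier) instead of hard-coding the add-brick/remove-brick wording. The enum values, helper names, and messages below are illustrative only and are not the actual GlusterFS CLI code.

#include <stdio.h>

/* Hypothetical command identifiers; the real CLI tracks the issued command
 * differently, this only illustrates the message selection. */
typedef enum {
    CMD_ADD_BRICK,
    CMD_ATTACH_TIER,
    CMD_REMOVE_BRICK,
    CMD_DETACH_TIER
} cli_cmd_t;

/* Map the issued command to the name that should appear in the status line. */
static const char *
cli_op_name(cli_cmd_t cmd)
{
    switch (cmd) {
    case CMD_ATTACH_TIER:
        return "attach-tier";
    case CMD_DETACH_TIER:
        return "detach-tier";
    case CMD_REMOVE_BRICK:
        return "remove-brick";
    default:
        return "add-brick";
    }
}

/* Print the result using the operation the user actually ran. */
static void
cli_print_result(cli_cmd_t cmd, int op_ret, const char *volname)
{
    if (op_ret == 0)
        printf("volume %s: success\n", cli_op_name(cmd));
    else
        printf("volume %s: failed on volume %s\n", cli_op_name(cmd), volname);
}

int
main(void)
{
    cli_print_result(CMD_ATTACH_TIER, 0, "ec_vol1");  /* volume attach-tier: success */
    cli_print_result(CMD_DETACH_TIER, -1, "ec_vol1"); /* volume detach-tier: failed on volume ec_vol1 */
    return 0;
}

With message selection done this way, a failed detach would be reported as a detach-tier failure rather than a remove-brick failure, which is the behaviour requested above.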


Version-Release number of selected component (if applicable):
============================================================
[root@yarrow ~]# gluster --version
glusterfs 3.7dev built on Apr  8 2015 17:57:45
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@yarrow ~]# rpm -qa|grep gluster
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64



How reproducible:
=================
always and easily


Steps to Reproduce:
==================
1. Create a regular volume.
2. Attach a tier; the response the user gets talks about the brick rather than the tier.
3. The same happens when detaching the tier.

--- Additional comment from Anand Avati on 2015-04-17 05:01:13 EDT ---

REVIEW: http://review.gluster.org/10284 (cli/tiering: Enhance cli output for tiering) posted (#1) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-02 08:15:10 EDT ---

REVIEW: http://review.gluster.org/10284 (cli/tiering: Enhance cli output for tiering) posted (#2) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-08 18:12:26 EDT ---

REVIEW: http://review.gluster.org/10284 (cli/tiering: Enhance cli output for tiering) posted (#5) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-09 00:27:57 EDT ---

COMMIT: http://review.gluster.org/10284 committed in master by Vijay Bellur (vbellur) 
------
commit 2676c402bc47ee89b763393e496a013e82d76e54
Author: Mohammed Rafi KC <rkavunga>
Date:   Sat May 2 17:31:07 2015 +0530

    cli/tiering: Enhance cli output for tiering
    
    Fix for handling cli output for attach-tier and
    detach-tier
    
    Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
    BUG: 1211562
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/10284
    Reviewed-by: Dan Lambright <dlambrig>
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur>

--- Additional comment from Niels de Vos on 2015-05-15 09:07:34 EDT ---

This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.

--- Additional comment from Niels de Vos on 2015-05-18 02:41:39 EDT ---

Comment 3 Triveni Rao 2015-06-12 06:46:00 UTC
This bug is verified and no issue was found:

[root@rhsqa14-vm1 ~]# gluster v create sun  replica 2 10.70.47.165:/rhs/brick1/m0 10.70.47.163:/rhs/brick1/m0 10.70.47.165:/rhs/brick2/m0 10.70.47.163:/rhs/brick2/m0
volume create: sun: success: please start the volume to access data
[root@rhsqa14-vm1 ~]# gluster v start sun
volume start: sun: success
[root@rhsqa14-vm1 ~]# gluster v info
 
Volume Name: sun
Type: Distributed-Replicate
Volume ID: 99251080-9431-4388-ad4c-113cb7ca0685
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.165:/rhs/brick1/m0
Brick2: 10.70.47.163:/rhs/brick1/m0
Brick3: 10.70.47.165:/rhs/brick2/m0
Brick4: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# gluster v attach-tier sun replica 2 10.70.47.165:/rhs/brick3/m0 10.70.47.163:/rhs/brick3/m0 
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: sun: success: Rebalance on sun has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 38d0d37b-dac9-4730-9f5b-ba58d1d57dcb

[root@rhsqa14-vm1 ~]# gluster v info
 
Volume Name: sun
Type: Tier
Volume ID: 99251080-9431-4388-ad4c-113cb7ca0685
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: 10.70.47.163:/rhs/brick3/m0
Brick2: 10.70.47.165:/rhs/brick3/m0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.47.165:/rhs/brick1/m0
Brick4: 10.70.47.163:/rhs/brick1/m0
Brick5: 10.70.47.165:/rhs/brick2/m0
Brick6: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# gluster v detach-tier sun start
volume detach-tier start: success
ID: e6f53bab-2ad9-4a30-afb0-724f6468e5b7
[root@rhsqa14-vm1 ~]# 


[root@rhsqa14-vm1 ~]# gluster v detach-tier sun commit
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 
[root@rhsqa14-vm1 ~]# gluster v info
 
Volume Name: sun
Type: Distributed-Replicate
Volume ID: 99251080-9431-4388-ad4c-113cb7ca0685
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.165:/rhs/brick1/m0
Brick2: 10.70.47.163:/rhs/brick1/m0
Brick3: 10.70.47.165:/rhs/brick2/m0
Brick4: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]#

Comment 4 Triveni Rao 2015-06-12 11:37:55 UTC
[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7.1 built on Jun  9 2015 02:31:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-1.el6rhs.x86_64
glusterfs-cli-3.7.1-1.el6rhs.x86_64
glusterfs-libs-3.7.1-1.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-1.el6rhs.x86_64
glusterfs-fuse-3.7.1-1.el6rhs.x86_64
glusterfs-server-3.7.1-1.el6rhs.x86_64
glusterfs-api-3.7.1-1.el6rhs.x86_64
[root@rhsqa14-vm1 ~]#

Comment 5 errata-xmlrpc 2015-07-29 04:58:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html