Bug 1419868

Summary: removing old tier commands under the rebalance commands
Product: [Community] GlusterFS
Reporter: hari gowtham <hgowtham>
Component: tiering
Assignee: hari gowtham <hgowtham>
Status: CLOSED CURRENTRELEASE
QA Contact: bugs <bugs>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.10
CC: amukherj, bugs
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1415590
Environment:
Last Closed: 2017-03-06 17:45:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1415590
Bug Blocks:

Description hari gowtham 2017-02-07 09:50:40 UTC
+++ This bug was initially created as a clone of Bug #1415590 +++

Description of problem:
gluster v rebalance <volname> tier {start|status} spawns a new tier daemon and shows the status for that daemon.
The tier daemon spawned under the service framework and the one started by the rebalance command are different processes, so we end up with two tier daemons for the same volume. The older one has to be removed.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
100%

Steps to Reproduce:
1. Create a volume (any configuration).
2. Attach a tier to this volume using the gluster v tier <volname> attach ... command (spawns a tier daemon).
3. Issue gluster v rebalance <volname> tier start (spawns another tier daemon for the same volume).
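The steps above can be sketched as a shell session; the volume name, server name, and brick paths are hypothetical placeholders, not values from this report:

```shell
# Create and start a plain volume (names and paths are examples).
gluster volume create testvol server1:/bricks/b1 server1:/bricks/b2 force
gluster volume start testvol

# Attach a hot tier with the tier command family -- this spawns a
# tierd for the volume under the service framework.
gluster volume tier testvol attach server1:/bricks/hot1 force

# Issuing the old rebalance-based tier command spawns a SECOND tierd
# for the same volume, which is the bug being reported.
gluster volume rebalance testvol tier start
gluster volume rebalance testvol tier status
```

Running these requires a live Gluster cluster, so the sequence is illustrative only.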

Actual results:
A new tier daemon is created, and the status for that daemon is also shown.

Expected results:
gluster v rebalance <volname> tier start/status should not create a new daemon; it should either connect to the actual tier daemon from the service framework or be removed.

Additional info:
Both command sets could be made to work with one tierd, but that would make the new tierd under the service framework ugly and hard to maintain. As the rebalance commands are being separated from the tier commands, it is better to remove support for the older tier commands under rebalance.
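For reference, a rough mapping from the removed rebalance-based forms to the service-framework tier command family might look like the following (a sketch assuming the 3.10-era tier CLI; tierd is started by attach, so there is no separate start):

```shell
# Old (removed) rebalance-based forms:
#   gluster volume rebalance <volname> tier start
#   gluster volume rebalance <volname> tier status

# Remaining tier commands under the service framework:
gluster volume tier <volname> attach <hot-brick> ...
gluster volume tier <volname> status
gluster volume tier <volname> detach start
gluster volume tier <volname> detach status
```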

--- Additional comment from Worker Ant on 2017-01-25 02:02:58 EST ---

REVIEW: https://review.gluster.org/16463 (CLI/TIER: removing old tier commands under rebalance) posted (#2) for review on master by hari gowtham (hari.gowtham005)

--- Additional comment from Worker Ant on 2017-02-06 02:51:02 EST ---

REVIEW: https://review.gluster.org/16463 (CLI/TIER: removing old tier commands under rebalance) posted (#3) for review on master by hari gowtham (hari.gowtham005)

--- Additional comment from Worker Ant on 2017-02-07 01:36:30 EST ---

COMMIT: https://review.gluster.org/16463 committed in master by Atin Mukherjee (amukherj) 
------
commit 563cafb5a5e742fc7fd2c175b332f0000c053040
Author: hari gowtham <hgowtham>
Date:   Tue Jan 24 14:24:47 2017 +0530

    CLI/TIER: removing old tier commands under rebalance
    
    PROBLEM: gluster v rebalance <volname> tier start works even after
    the switch of tier to service framework.
    This lets the user have two tierd for the same volume.
    
    FIX: checking for each process will make the new code hard
    to maintain. So we are removing the support for old commands.
    
    Change-Id: I5b0974b2dbb74f0bee8344b61c7f924300ad73f2
    BUG: 1415590
    Signed-off-by: hari gowtham <hgowtham>
    Reviewed-on: https://review.gluster.org/16463
    Smoke: Gluster Build System <jenkins.org>
    Tested-by: hari gowtham <hari.gowtham005>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    Reviewed-by: Atin Mukherjee <amukherj>

Comment 1 Worker Ant 2017-02-07 09:53:45 UTC
REVIEW: https://review.gluster.org/16555 (CLI/TIER: removing old tier commands under rebalance) posted (#1) for review on release-3.10 by hari gowtham (hari.gowtham005)

Comment 2 Worker Ant 2017-02-07 14:28:42 UTC
COMMIT: https://review.gluster.org/16555 committed in release-3.10 by Shyamsundar Ranganathan (srangana) 
------
commit c13a39e6c425622221226e5a3c49aafbf430a07d
Author: hari gowtham <hgowtham>
Date:   Tue Jan 24 14:24:47 2017 +0530

    CLI/TIER: removing old tier commands under rebalance
    
            back-port of : https://review.gluster.org/#/c/16463/
    
    PROBLEM: gluster v rebalance <volname> tier start works even after
    the switch of tier to service framework.
    This lets the user have two tierd for the same volume.
    
    FIX: checking for each process will make the new code hard
    to maintain. So we are removing the support for old commands.
    
    >Change-Id: I5b0974b2dbb74f0bee8344b61c7f924300ad73f2
    >BUG: 1415590
    >Signed-off-by: hari gowtham <hgowtham>
    >Reviewed-on: https://review.gluster.org/16463
    >Smoke: Gluster Build System <jenkins.org>
    >Tested-by: hari gowtham <hari.gowtham005>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: N Balachandran <nbalacha>
    >Reviewed-by: Atin Mukherjee <amukherj>
    
    Change-Id: Ib996d89b1bd250176a3f5eeb369b71b0a4f95968
    BUG: 1419868
    Signed-off-by: hari gowtham <hgowtham>
    Reviewed-on: https://review.gluster.org/16555
    Smoke: Gluster Build System <jenkins.org>
    Tested-by: hari gowtham <hari.gowtham005>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    CentOS-regression: Gluster Build System <jenkins.org>

Comment 3 Shyamsundar 2017-03-06 17:45:52 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/