Bug 1415590 - removing old tier commands under the rebalance commands
Summary: removing old tier commands under the rebalance commands
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1419868
 
Reported: 2017-01-23 07:40 UTC by hari gowtham
Modified: 2017-05-30 18:39 UTC
CC: 2 users

Fixed In Version: glusterfs-3.11.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1419868 (view as bug list)
Environment:
Last Closed: 2017-05-30 18:39:31 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description hari gowtham 2017-01-23 07:40:21 UTC
Description of problem:
gluster v rebalance <volname> tier {start|status} spawns a new tier daemon and shows the status for that daemon.
The tier daemon spawned under the service framework and the one started by the rebalance command are different, so we end up with two tier daemons for the same volume. The older one has to be removed.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
100%

Steps to Reproduce:
1. Create a volume (any configuration).
2. Attach a tier to this volume using the gluster v tier <volname> attach ... command (spawns a tier daemon).
3. Issue gluster v rebalance <volname> tier start (spawns another tier daemon for the same volume).
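The steps above can be sketched as a shell session against a live cluster. The volume name and brick paths here are hypothetical placeholders; the command forms match the gluster CLI as described in this report:

```shell
# 1. Create and start a plain volume (any configuration works).
gluster volume create demo-vol server1:/bricks/cold1 server2:/bricks/cold2
gluster volume start demo-vol

# 2. Attach a hot tier; this spawns the tier daemon under the service framework.
gluster volume tier demo-vol attach server1:/bricks/hot1 server2:/bricks/hot2

# 3. The old-style rebalance command spawns a second, independent tier daemon
#    for the same volume -- the bug described here.
gluster volume rebalance demo-vol tier start

# Each command now reports status from its own daemon.
gluster volume rebalance demo-vol tier status
gluster volume tier demo-vol status
```

After the fix, the tier subcommands under gluster volume rebalance are rejected, leaving the service-framework tier daemon as the only one.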

Actual results:
A new tier daemon is created, and the status for that daemon is shown as well.

Expected results:
gluster v rebalance <volname> tier start/status should not create a new daemon; it should instead connect to the actual tier daemon managed by the service framework.

Additional info:
Both commands can be made to work with a single tierd, but that would make the new tierd code under the service framework ugly and hard to maintain. Since the rebalance commands are being separated from the tier commands, it is better to remove support for the older tier commands under rebalance.

Comment 1 Worker Ant 2017-01-25 07:02:58 UTC
REVIEW: https://review.gluster.org/16463 (CLI/TIER: removing old tier commands under rebalance) posted (#2) for review on master by hari gowtham (hari.gowtham005)

Comment 2 Worker Ant 2017-02-06 07:51:02 UTC
REVIEW: https://review.gluster.org/16463 (CLI/TIER: removing old tier commands under rebalance) posted (#3) for review on master by hari gowtham (hari.gowtham005)

Comment 3 Worker Ant 2017-02-07 06:36:30 UTC
COMMIT: https://review.gluster.org/16463 committed in master by Atin Mukherjee (amukherj) 
------
commit 563cafb5a5e742fc7fd2c175b332f0000c053040
Author: hari gowtham <hgowtham>
Date:   Tue Jan 24 14:24:47 2017 +0530

    CLI/TIER: removing old tier commands under rebalance
    
    PROBLEM: gluster v rebalance <volname> tier start works even after
    the switch of tier to service framework.
    This lets the user have two tierd for the same volume.
    
    FIX: checking for each process will make the new code hard
    to maintain. So we are removing the support for old commands.
    
    Change-Id: I5b0974b2dbb74f0bee8344b61c7f924300ad73f2
    BUG: 1415590
    Signed-off-by: hari gowtham <hgowtham>
    Reviewed-on: https://review.gluster.org/16463
    Smoke: Gluster Build System <jenkins.org>
    Tested-by: hari gowtham <hari.gowtham005>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    Reviewed-by: Atin Mukherjee <amukherj>

Comment 4 Shyamsundar 2017-05-30 18:39:31 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

