Bug 1258441 - Data Tiering: Need to change task name to tier-rebalance for the tier daemon shown under tier vol status
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: low
Assigned To: hari gowtham
bugs@gluster.org
: Reopened, Triaged
Depends On:
Blocks: 1260923
Reported: 2015-08-31 07:54 EDT by nchilaka
Modified: 2017-03-08 05:47 EST
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-08 05:47:55 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

  None
Description nchilaka 2015-08-31 07:54:21 EDT
Description of problem:
======================
When we issue a tier volume status, the Tasks section lists a "Rebalance" task. Although it is rebalance-related, the process is precisely the tier-rebalance daemon, so it should be labeled as tier rebalance to clear the ambiguity.

The current task label is identical to that of a rebalance task triggered manually by a user on a regular volume.

We need to differentiate between the two so that the output can be interpreted unambiguously.



Tier volume:
============
[root@nag-manual-node1 ~]# gluster v status vol100
Status of volume: vol100
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.36:/rhs/brick3/vol100        49182     0          Y       27701
Brick 10.70.46.84:/rhs/brick3/vol100        49182     0          Y       2296 
Cold Bricks:
Brick 10.70.46.84:/rhs/brick1/vol100        49178     0          Y       805  
Brick 10.70.46.36:/rhs/brick1/vol100        49178     0          Y       26427
NFS Server on localhost                     2049      0          Y       2315 
NFS Server on 10.70.46.36                   2049      0          Y       27720
 
Task Status of Volume vol100
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 23050ef8-fe7b-4c3e-ad88-55127fe62629
Status               : in progress         
 




Regular volume rebalance:
========================
[root@nag-manual-node1 ~]# gluster v info vol100
 
Volume Name: vol100
Type: Distribute
Volume ID: a26d987e-9d38-4ddf-b014-7c4c3b444dad
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.46.84:/rhs/brick1/vol100
Brick2: 10.70.46.36:/rhs/brick1/vol100
Options Reconfigured:
performance.readdir-ahead: on

[root@nag-manual-node1 ~]# gluster v rebalance vol100 start
volume rebalance: vol100: success: Rebalance on vol100 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 932dd6f1-49cd-4d62-9089-e4d3d6cec46b

[root@nag-manual-node1 ~]# gluster v status vol100
Status of volume: vol100
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.84:/rhs/brick1/vol100        49178     0          Y       805  
Brick 10.70.46.36:/rhs/brick1/vol100        49178     0          Y       26427
NFS Server on localhost                     2049      0          Y       2414 
NFS Server on 10.70.46.36                   2049      0          Y       27787
 
Task Status of Volume vol100
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 932dd6f1-49cd-4d62-9089-e4d3d6cec46b
Status               : completed
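
The two transcripts above show identical "Task : Rebalance" entries for very different processes. A sketch of how the tiered volume's task section could look once the task name is changed; the exact label string ("Tier Rebalance") is an assumption here, as the string chosen by the eventual patch may differ:

```
Task Status of Volume vol100
------------------------------------------------------------------------------
Task                 : Tier Rebalance
ID                   : 23050ef8-fe7b-4c3e-ad88-55127fe62629
Status               : in progress
```

With a distinct label, scripts and admins parsing `gluster v status` output can tell an always-running tier daemon apart from a user-triggered rebalance without consulting the volume type.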
Comment 1 Mohammed Rafi KC 2015-09-01 08:31:53 EDT

*** This bug has been marked as a duplicate of bug 1258340 ***
Comment 2 hari gowtham 2015-09-11 03:19:30 EDT
This bug is fixed along with the patch sent for another bug (1261837 - master).

The url for this fix in master is: http://review.gluster.org/#/c/12149/
Comment 3 hari gowtham 2015-09-29 02:41:23 EDT
The url for the fix in 3.7 is: http://review.gluster.org/#/c/12203/
Comment 4 Kaushal 2017-03-08 05:47:55 EST
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
