Bug 1258441

Summary: Data Tiering: Need to change task name to tier-rebalance for the tier daemon shown under tier vol status
Product: [Community] GlusterFS
Reporter: Nag Pavan Chilakam <nchilaka>
Component: tiering
Assignee: hari gowtham <hgowtham>
Status: CLOSED EOL
QA Contact: bugs <bugs>
Severity: low
Docs Contact:
Priority: high
Version: 3.7.5
CC: bugs, rkavunga, sankarshan
Target Milestone: ---
Keywords: Reopened, Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-08 10:47:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1260923

Description Nag Pavan Chilakam 2015-08-31 11:54:21 UTC
Description of problem:
======================
When we issue a tier vol status, the task shown under Tasks is "Rebalance".
Though it is rebalance related, it is in fact the tier-rebalance process. Hence we should label it as tier-rebalance to clear up the ambiguity.


The current task display is the same as that of a rebalance task triggered manually by a user on a regular volume.

We need to differentiate between the two so that the output can be interpreted unambiguously (see the sketch below).
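For illustration only, the requested change amounts to keying the "Task" label on the task type instead of always printing "Rebalance". The sketch below is hypothetical and is not the actual glusterd code; the enum values, function name, and the exact label ("Tier rebalance" here, per the summary) are assumptions for illustration.

/* Hypothetical sketch, not actual glusterd code: derive the "Task"
 * label shown in volume status from the task type. All names below
 * are illustrative. */
#include <stdio.h>

typedef enum {
        TASK_REBALANCE,       /* rebalance started manually by a user        */
        TASK_TIER_REBALANCE   /* background migration run by the tier daemon */
} task_type_t;

static const char *
task_display_name (task_type_t type)
{
        switch (type) {
        case TASK_TIER_REBALANCE:
                return "Tier rebalance";   /* distinct label for the tier daemon */
        case TASK_REBALANCE:
        default:
                return "Rebalance";
        }
}

int
main (void)
{
        printf ("Task                 : %s\n", task_display_name (TASK_TIER_REBALANCE));
        printf ("Task                 : %s\n", task_display_name (TASK_REBALANCE));
        return 0;
}

With a mapping like this, a tier volume's status would print a tier-specific task name while a user-triggered rebalance would keep "Rebalance", removing the ambiguity shown in the two outputs below.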



Tier volume:
============
[root@nag-manual-node1 ~]# gluster v status vol100
Status of volume: vol100
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.36:/rhs/brick3/vol100        49182     0          Y       27701
Brick 10.70.46.84:/rhs/brick3/vol100        49182     0          Y       2296 
Cold Bricks:
Brick 10.70.46.84:/rhs/brick1/vol100        49178     0          Y       805  
Brick 10.70.46.36:/rhs/brick1/vol100        49178     0          Y       26427
NFS Server on localhost                     2049      0          Y       2315 
NFS Server on 10.70.46.36                   2049      0          Y       27720
 
Task Status of Volume vol100
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 23050ef8-fe7b-4c3e-ad88-55127fe62629
Status               : in progress         
 




Regular volume rebalance:
========================
[root@nag-manual-node1 ~]# gluster v info vol100
 
Volume Name: vol100
Type: Distribute
Volume ID: a26d987e-9d38-4ddf-b014-7c4c3b444dad
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.46.84:/rhs/brick1/vol100
Brick2: 10.70.46.36:/rhs/brick1/vol100
Options Reconfigured:
performance.readdir-ahead: on

[root@nag-manual-node1 ~]# gluster v rebalance vol100 start
volume rebalance: vol100: success: Rebalance on vol100 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 932dd6f1-49cd-4d62-9089-e4d3d6cec46b

[root@nag-manual-node1 ~]# gluster v status vol100
Status of volume: vol100
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.84:/rhs/brick1/vol100        49178     0          Y       805  
Brick 10.70.46.36:/rhs/brick1/vol100        49178     0          Y       26427
NFS Server on localhost                     2049      0          Y       2414 
NFS Server on 10.70.46.36                   2049      0          Y       27787
 
Task Status of Volume vol100
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 932dd6f1-49cd-4d62-9089-e4d3d6cec46b
Status               : completed

Comment 1 Mohammed Rafi KC 2015-09-01 12:31:53 UTC

*** This bug has been marked as a duplicate of bug 1258340 ***

Comment 2 hari gowtham 2015-09-11 07:19:30 UTC
This bug is fixed along with the patch sent for another bug (1261837 - master).

The url for this fix in master is: http://review.gluster.org/#/c/12149/

Comment 3 hari gowtham 2015-09-29 06:41:23 UTC
The url for the fix in 3.7 is: http://review.gluster.org/#/c/12203/

Comment 4 Kaushal 2017-03-08 10:47:55 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.