Bug 1223305 - data tiering:rebalance triggering automatically and not completing at all on tiered volume
Summary: data tiering:rebalance triggering automatically and not completing at all on tiered volume
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks: 1223636
 
Reported: 2015-05-20 09:55 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:42 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-09 14:51:49 UTC
Embargoed:



Description Nag Pavan Chilakam 2015-05-20 09:55:12 UTC
Description of problem:
=====================
Rebalance is triggered automatically once a tier is attached to a volume.
Firstly, why is this happening? Secondly, the rebalance stays in progress
indefinitely (for more than 2 hrs).
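
The in-progress task can be checked with the standard CLI; a minimal sketch, assuming the volume name vol2 used in the steps below (output not reproduced here):

gluster volume status vol2
gluster volume rebalance vol2 status

The first command lists the running task for the volume; the second prints the per-node counters and status for that task.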

Version-Release number of selected component (if applicable):
root@zod's password: 
Last login: Wed May 20 13:21:02 2015 from 10.10.50.190
[root@zod ~]# gluste --version
bash: gluste: command not found...
[root@zod ~]# gluster --version
glusterfs 3.7.0 built on May 15 2015 01:33:40
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@zod ~]# rpm -qa|grep gluster
glusterfs-debuginfo-3.7.0-2.el7rhs.x86_64
glusterfs-geo-replication-3.7.0-2.el7rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el7rhs.x86_64
glusterfs-cli-3.7.0-2.el7rhs.x86_64
glusterfs-libs-3.7.0-2.el7rhs.x86_64
glusterfs-api-3.7.0-2.el7rhs.x86_64
glusterfs-server-3.7.0-2.el7rhs.x86_64
glusterfs-resource-agents-3.7.0-2.el7rhs.noarch
glusterfs-rdma-3.7.0-2.el7rhs.x86_64
glusterfs-devel-3.7.0-2.el7rhs.x86_64
glusterfs-api-devel-3.7.0-2.el7rhs.x86_64
glusterfs-3.7.0-2.el7rhs.x86_64
glusterfs-fuse-3.7.0-2.el7rhs.x86_64
[root@zod ~]# gluster v info



Steps to Reproduce:
1. Create a dist-rep volume and start it.
2. Attach a tier (pure distribute).
As the transcript below shows, rebalance triggers immediately but does not complete at all.

[root@zod ~]# gluster v create vol2 replica 2 10.70.35.144:/brick_200G_1/vol2  yarrow:/brick_200G_1/vol2 10.70.35.144:/brick_200G_2/vol2 yarrow:/brick_200G_2/vol2 
volume create: vol2: success: please start the volume to access data
[root@zod ~]# gluster v start vol2
volume start: vol2: success
[root@zod ~]# gluster v attach-tier vol2 yarrow:/ssdbricks_75G_1/vol2 10.70.35.144:/ssdbricks_75G_1/vol2 
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol2: success: Rebalance on vol2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: aaea82ea-ddd5-4ae5-ad68-5343da3a29c8

[root@zod ~]# gluster v info vol2
 
Volume Name: vol2
Type: Tier
Volume ID: 858ae0b9-0cc9-41a9-b89b-d42e6791e2d7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.35.144:/ssdbricks_75G_1/vol2
Brick2: yarrow:/ssdbricks_75G_1/vol2
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.35.144:/brick_200G_1/vol2
Brick4: yarrow:/brick_200G_1/vol2
Brick5: 10.70.35.144:/brick_200G_2/vol2
Brick6: yarrow:/brick_200G_2/vol2
Options Reconfigured:
performance.readdir-ahead: on
[root@zod ~]# gluster v status vol2
Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.35.144:/ssdbricks_75G_1/vol2    49157     0          Y       5742 
Brick yarrow:/ssdbricks_75G_1/vol2          49157     0          Y       25289
Brick 10.70.35.144:/brick_200G_1/vol2       49155     0          Y       5618 
Brick yarrow:/brick_200G_1/vol2             49155     0          Y       25173
Brick 10.70.35.144:/brick_200G_2/vol2       49156     0          Y       5636 
Brick yarrow:/brick_200G_2/vol2             49156     0          Y       25192
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on yarrow                        N/A       N/A        N       N/A  
 
Task Status of Volume vol2
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : aaea82ea-ddd5-4ae5-ad68-5343da3a29c8
Status               : in progress         
 
[root@zod ~]#

Comment 2 Mohammed Rafi KC 2015-06-09 14:51:49 UTC
This is a design change, hence closing this bug.
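
Since starting the tier task on attach-tier is intentional, the task is expected to remain "in progress" for as long as the volume is tiered, rather than reach a completed state. If tiering is no longer wanted, the hot tier can be removed again; a minimal sketch, assuming the detach-tier start/commit syntax of the 3.7 CLI (vol2 as above):

gluster volume detach-tier vol2 start
gluster volume detach-tier vol2 commit

detach-tier start begins migrating data off the hot tier bricks; commit removes them once the migration finishes.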

