Bug 1367807 - performance.client-io-threads: issuing a rebalance immediately after stopping a current rebalance fails
Summary: performance.client-io-threads: issuing a rebalance immediately after stoppin...
Keywords:
Status: CLOSED DUPLICATE of bug 1299334
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-17 14:17 UTC by Nag Pavan Chilakam
Modified: 2016-09-16 05:05 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-16 05:05:02 UTC
Target Upstream Version:



Description Nag Pavan Chilakam 2016-08-17 14:17:11 UTC
Description of problem:
=======================
Created a 1x(4+2) disperse volume and wrote about 70 GB of data with the following options:
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
performance.client-io-threads: on
performance.readdir-ahead: on

Added another 4+2 subvolume (distribute) and triggered a rebalance.

While the rebalance was running (status showed completed on all other nodes while localhost showed in progress), I triggered a rebalance stop, which stopped it immediately.

I then immediately triggered rebalance start again, which failed as below:
[root@dhcp35-179 glusterfs]# gluster v rebal  ecvol42-clientio start
volume rebalance: ecvol42-clientio: failed: Rebalance on ecvol42-clientio is already started
[root@dhcp35-179 glusterfs]# gluster v info ecvol42-clientio

However, the rebalance status does not show any change.

However, if I wait for some time, say 5-10 minutes, I am able to retrigger it successfully.


Version-Release number of selected component (if applicable):

[root@dhcp35-179 glusterfs]# rpm -qa|grep gluster
glusterfs-geo-replication-3.7.9-10.el7rhgs.x86_64
glusterfs-api-3.7.9-10.el7rhgs.x86_64
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.5.el7rhgs.noarch
python-gluster-3.7.9-10.el7rhgs.noarch
glusterfs-libs-3.7.9-10.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-10.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-rdma-3.7.9-10.el7rhgs.x86_64
glusterfs-3.7.9-10.el7rhgs.x86_64
glusterfs-fuse-3.7.9-10.el7rhgs.x86_64
glusterfs-cli-3.7.9-10.el7rhgs.x86_64
glusterfs-server-3.7.9-10.el7rhgs.x86_64
[root@dhcp35-179 glusterfs]# 

How reproducible:
=================
Always
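The steps above can be sketched as the following CLI sequence (a minimal sketch only: the volume name, brick paths, and host names are hypothetical, and this assumes a working RHGS 3.1 trusted storage pool with six servers):

```shell
# Create a 1x(4+2) disperse volume and enable the options from the report.
# Hypothetical hosts host1..host6 and brick path /bricks/b1.
gluster volume create ecvol42-clientio disperse 6 redundancy 2 \
    host{1..6}:/bricks/b1
gluster volume set ecvol42-clientio performance.client-io-threads on
gluster volume set ecvol42-clientio server.event-threads 4
gluster volume set ecvol42-clientio client.event-threads 4
gluster volume start ecvol42-clientio

# (Mount the volume and write ~70 GB of data here.)

# Expand with a second 4+2 subvolume and start a rebalance.
gluster volume add-brick ecvol42-clientio host{1..6}:/bricks/b2
gluster volume rebalance ecvol42-clientio start

# Stop the rebalance while it is still in progress on localhost,
# then immediately restart it.
gluster volume rebalance ecvol42-clientio stop
gluster volume rebalance ecvol42-clientio start
# Observed failure: "Rebalance on ecvol42-clientio is already started"
```

After waiting 5-10 minutes, the final `rebalance ... start` succeeds, per the report.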

Comment 2 Nithya Balachandran 2016-09-16 05:05:02 UTC
Marking this as a duplicate of bug 1299334.

*** This bug has been marked as a duplicate of bug 1299334 ***

