Bug 1367807

Summary: performance.client-io-threads: issuing a rebalance immediately after stopping a current rebalance fails
Product: Red Hat Gluster Storage
Reporter: Nag Pavan Chilakam <nchilaka>
Component: distribute
Assignee: Nithya Balachandran <nbalacha>
Status: CLOSED DUPLICATE
QA Contact: Prasad Desala <tdesala>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: rhs-bugs, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-16 05:05:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Nag Pavan Chilakam 2016-08-17 14:17:11 UTC
Description of problem:
=======================
Created a 1x(4+2) disperse volume and pumped about 70GB of data into it with the following options:
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
performance.client-io-threads: on
performance.readdir-ahead: on
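
For reference, a volume with this configuration could be created and tuned roughly as follows (a minimal sketch; the server names and brick paths are placeholders, not taken from this report):

# Illustrative only: serverN and /bricks/bN are hypothetical.
gluster volume create ecvol42-clientio disperse 6 redundancy 2 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 \
    server4:/bricks/b4 server5:/bricks/b5 server6:/bricks/b6
gluster volume start ecvol42-clientio
gluster volume set ecvol42-clientio server.event-threads 4
gluster volume set ecvol42-clientio client.event-threads 4
gluster volume set ecvol42-clientio performance.client-io-threads on
gluster volume set ecvol42-clientio performance.readdir-ahead on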

Added another 4+2 subvolume (making the volume a 2x(4+2) distributed-disperse) and triggered a rebalance.
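
A minimal sketch of this step (the brick hostnames and paths are again placeholders):

# Adding a second 4+2 disperse subvolume (hypothetical paths):
gluster volume add-brick ecvol42-clientio \
    server1:/bricks/b7 server2:/bricks/b8 server3:/bricks/b9 \
    server4:/bricks/b10 server5:/bricks/b11 server6:/bricks/b12
gluster volume rebalance ecvol42-clientio start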

While the rebalance was still running (status showed completed on all other nodes, while the localhost still showed in progress), I triggered a rebalance stop, which stopped it immediately.
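
The check-and-stop step corresponds to the standard rebalance CLI; as a sketch:

# Check per-node progress, then stop mid-rebalance:
gluster volume rebalance ecvol42-clientio status
gluster volume rebalance ecvol42-clientio stop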

I then immediately triggered a rebalance start again, which failed as below:
[root@dhcp35-179 glusterfs]# gluster v rebal  ecvol42-clientio start
volume rebalance: ecvol42-clientio: failed: Rebalance on ecvol42-clientio is already started
[root@dhcp35-179 glusterfs]# gluster v info ecvol42-clientio

The rebalance status, however, does not show any change.

If I wait for some time, say 5-10 minutes, then I am able to retrigger the rebalance successfully.
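
A plausible workaround, based on the observation above (my assumption, not verified in this report), is to poll the rebalance status until the stopped rebalance has fully wound down before restarting:

# Hypothetical retry sequence; poll until no node reports "in progress":
gluster volume rebalance ecvol42-clientio status
# ...once every node shows stopped/completed, restarting succeeds:
gluster volume rebalance ecvol42-clientio start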


Version-Release number of selected component (if applicable):

[root@dhcp35-179 glusterfs]# rpm -qa|grep gluster
glusterfs-geo-replication-3.7.9-10.el7rhgs.x86_64
glusterfs-api-3.7.9-10.el7rhgs.x86_64
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.5.el7rhgs.noarch
python-gluster-3.7.9-10.el7rhgs.noarch
glusterfs-libs-3.7.9-10.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-10.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-rdma-3.7.9-10.el7rhgs.x86_64
glusterfs-3.7.9-10.el7rhgs.x86_64
glusterfs-fuse-3.7.9-10.el7rhgs.x86_64
glusterfs-cli-3.7.9-10.el7rhgs.x86_64
glusterfs-server-3.7.9-10.el7rhgs.x86_64
[root@dhcp35-179 glusterfs]# 

How reproducible:
=================
always

Comment 2 Nithya Balachandran 2016-09-16 05:05:02 UTC
Marking this a duplicate of 1299334.

*** This bug has been marked as a duplicate of bug 1299334 ***