Bug 1066130 - simultaneous start rebalance only starts rebalance for one volume for volumes made up of 16 hosts
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Sahina Bose
QA Contact: RHS-C QE
Depends On:
Blocks: 1035040
Reported: 2014-02-17 14:23 EST by Dustin Tsang
Modified: 2015-12-03 12:13 EST
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Simultaneous start of rebalance on volumes that span the same set of hosts fails because a glusterd lock is acquired on the participating hosts. Workaround: start rebalance on the other volume again after the process starts on the first volume.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:13:38 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Dustin Tsang 2014-02-17 14:23:39 EST
Description of problem:

On volumes made up of bricks on 16 distinct hosts, starting rebalance on 3 volumes simultaneously fails on 2 out of 3 of the volumes.



Version-Release number of selected component (if applicable):
rhsc-cb17

How reproducible:
every time

Steps to Reproduce:
1. Create 3 volumes, each with bricks on 16 distinct hosts.
2. Ctrl-click to select the 3 volumes in the main Volumes tab.
3. Right-click one of the selected volumes and choose Rebalance from the context menu.


Actual results:
A 'could not start Gluster volume x rebalance' event is reported for 2 of the volumes.

rhsc-log-collector logs: http://rhsqe-repo.lab.eng.blr.redhat.com/dustin/sosreport-LogCollector-20140217141607.tar.xz

Expected results:
rebalance starts on all 3 volumes

Additional info:
Comment 2 Sahina Bose 2014-02-18 01:21:03 EST
From vdsm logs, there's an exception

on vol_three
GlusterVolumeRebalanceStartFailedException: <unprintable GlusterVolumeRebalanceStartFailedException object>

on vol_dis
GlusterVolumeRebalanceStartFailedException: Volume rebalance start failed
error: Another transaction could be in progress. Please try again after sometime.
return code: 61

I'm not sure which prior operation on the volume obtained the lock that causes this. Will need help from the gluster team to analyse.
Comment 3 Sahina Bose 2014-02-19 01:26:36 EST
If all 3 volumes being started span the same set of hosts, the simultaneous start fails because the gluster CLI obtains a lock on each participating host for the rebalance start command.
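
As a hedged illustration of the workaround described in the Doc Text (start rebalance serially, retrying once the previous start has released the glusterd lock), here is a minimal Python sketch that drives the gluster CLI from one of the hosts. The volume names, retry count, and delay are illustrative assumptions, not values taken from this report; only the 'gluster volume rebalance <VOLNAME> start' command itself is from the gluster CLI.

#!/usr/bin/env python3
# Sketch of the documented workaround: start rebalance on each volume in
# turn, retrying while glusterd still holds the cluster-wide lock from a
# previous start. Volume names, retry count, and delay are hypothetical.
import subprocess
import time

VOLUMES = ["vol_one", "vol_two", "vol_three"]  # hypothetical volume names
RETRIES = 10
DELAY_SECONDS = 5

def start_rebalance(volume):
    """Run 'gluster volume rebalance <volume> start'; return True on success."""
    result = subprocess.run(
        ["gluster", "volume", "rebalance", volume, "start"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for volume in VOLUMES:
    for attempt in range(RETRIES):
        if start_rebalance(volume):
            print("rebalance started on %s" % volume)
            break
        # Another transaction likely holds the glusterd lock; wait and retry.
        time.sleep(DELAY_SECONDS)
    else:
        print("giving up on %s after %d attempts" % (volume, RETRIES))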
Comment 4 Shalaka 2014-02-19 05:57:36 EST
Edited the doc text as discussed.
Comment 6 Vivek Agarwal 2015-12-03 12:13:38 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
