Bug 1066130
Summary: | simultaneous start rebalance only starts rebalance for one volume for volumes made up of 16 hosts | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Dustin Tsang <dtsang> |
Component: | rhsc | Assignee: | Sahina Bose <sabose> |
Status: | CLOSED EOL | QA Contact: | RHS-C QE <rhsc-qe-bugs> |
Severity: | unspecified | Docs Contact: | |
Priority: | low | ||
Version: | 2.1 | CC: | asriram, knarra, mmccune, nlevinki, rhs-bugs, sdharane, ssampat |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Known Issue
Doc Text: |
Simultaneous start of rebalance on volumes that span the same set of hosts fails because a glusterd lock is acquired on the participating hosts.
Workaround: Start rebalance on the other volume again after the process has started on the first volume.
|
Story Points: | --- | ||
Clone Of: | | Environment: |
Last Closed: | 2015-12-03 17:13:38 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1035040 |
Description

Dustin Tsang
2014-02-17 19:23:39 UTC

From the vdsm logs, there is an exception on vol_three:

GlusterVolumeRebalanceStartFailedException: <unprintable GlusterVolumeRebalanceStartFailedException object>

and on vol_dis:

GlusterVolumeRebalanceStartFailedException: Volume rebalance start failed
error: Another transaction could be in progress. Please try again after sometime.
return code: 61

I'm not sure which prior operation on the volume has obtained the lock that causes this; will need help from the gluster team to analyse.

If all 3 volumes being started span the same set of hosts, a simultaneous start fails because the gluster CLI obtains a lock on the host for the start rebalance command.

Edited the doc text as discussed.

Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
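The workaround above can be sketched as a small shell loop that starts rebalance on the volumes one at a time, retrying a volume when glusterd reports that another transaction holds the lock. This is a minimal illustration, not the product's behavior: the volume names are the hypothetical ones from the report, and the `GLUSTER_CMD` override is an assumption added here so the sketch can be exercised without a live cluster (the real command is `gluster volume rebalance <VOLNAME> start`).

```shell
# Hedged sketch of the documented workaround: start rebalance serially,
# not simultaneously, because glusterd takes a per-host transaction lock.
# GLUSTER_CMD is overridable purely for illustration/testing purposes.
GLUSTER_CMD=${GLUSTER_CMD:-gluster}

start_rebalance_serially() {
  for vol in "$@"; do
    # Retry while the transaction lock is still held by the previous start
    # (the "Another transaction could be in progress" / return code 61 case).
    until $GLUSTER_CMD volume rebalance "$vol" start; do
      echo "rebalance start on $vol failed (lock held?); retrying in 10s" >&2
      sleep 10
    done
  done
}
```

Invoked as, for example, `start_rebalance_serially vol_one vol_dis vol_three`, each volume's rebalance is only requested after the previous request has been accepted, which avoids the simultaneous-start lock contention described in this bug.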