Bug 1049890 - [RHSC] - Failed rebalance operation started automatically when glusterd is restarted.
Status: CLOSED CANTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Shubhendu Tripathi
QA Contact: RamaKasturi
Depends On:
Blocks: 1035040
Reported: 2014-01-08 07:45 EST by RamaKasturi
Modified: 2015-03-26 02:13 EDT (History)
CC: 8 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
When the gluster daemon (glusterd) service is restarted, a previously failed rebalance is started automatically and its status is displayed as 'Started' in the Red Hat Storage Console.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-03-26 02:13:54 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
engine logs (7.16 MB, text/x-log)
2014-01-08 07:56 EST, Shruti Sampat

Description RamaKasturi 2014-01-08 07:45:52 EST
Description of problem:
A failed rebalance operation is started automatically when glusterd is restarted, with the event message "Detected start of rebalance on volume <volName> of Cluster <clusterName> from CLI."

Version-Release number of selected component (if applicable):
rhsc-2.1.2-0.32.el6rhs.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create 10 distribute volumes.
2. Start rebalance on all the volumes at once.
3. Once rebalance has started on one volume, click the status button; fetching the status details fails.
4. On the same volume, try fetching the volume's advanced details; this also fails.
5. Rebalance on all the remaining volumes then fails with the event message "Could not start Gluster Volume <volName> rebalance."
6. Now restart glusterd on all the nodes.
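The CLI portion of the steps above can be sketched roughly as follows (a sketch only: volume, server, and brick names are placeholders, and steps 3–5 are UI actions in the console whose nearest CLI equivalent is the status query):

```shell
# Step 1: create and start 10 distribute volumes (names/bricks illustrative).
for i in $(seq 1 10); do
    gluster volume create "vol_dis_$i" server1:/bricks/brick"$i" server2:/bricks/brick"$i"
    gluster volume start "vol_dis_$i"
done

# Step 2: start rebalance on all the volumes at once.
for i in $(seq 1 10); do
    gluster volume rebalance "vol_dis_$i" start
done

# Steps 3-5 happen in the RHSC UI; the CLI equivalent of the status check is:
gluster volume rebalance vol_dis_1 status

# Step 6: restart glusterd on every node (RHEL 6 style, matching el6rhs).
service glusterd restart
```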

Actual results:
Rebalance starts automatically on the volumes where it had failed, with the event message "Detected start of rebalance on volume <volName> of Cluster <clusterName> from CLI".

Expected results:
Rebalance should not be started automatically when glusterd is restarted.

Additional info:
Comment 1 Shruti Sampat 2014-01-08 07:56:15 EST
Created attachment 847112 [details]
engine logs
Comment 4 Shalaka 2014-01-23 04:15:26 EST
Please review the edited DocText and signoff.
Comment 5 RamaKasturi 2014-01-24 01:55:35 EST
I am seeing this issue very often in my local config. The following are the steps I performed:

1) Created 2 distribute volumes (say vol_dis and vol_dis1) and 2 distribute-replicate volumes (say vol_dis_rep and vol_dis_rep1) using 4 RHS servers.

2) Now start rebalance on vol_dis and vol_dis1.

3) Bring down glusterd on one of the nodes and stop rebalance on vol_dis.

4) Rebalance is stopped and the status icon gets updated with rebalance stopped icon.

5) Now bring glusterd back up on the node where it was stopped.

6) Now rebalance starts automatically on the volume vol_dis_rep, with the event message "Detected start of rebalance on volume vol_dis_rep of Cluster cluster_regress from CLI".
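The scenario in this comment can be sketched with the gluster CLI as follows (volume names are taken from the comment; host names and brick paths are placeholders, and the UI actions are approximated by their CLI equivalents):

```shell
# Step 2: start rebalance on the two distribute volumes.
gluster volume rebalance vol_dis start
gluster volume rebalance vol_dis1 start

# Step 3: stop glusterd on one node, then stop rebalance on vol_dis
# from a node where glusterd is still running.
service glusterd stop                  # run on the chosen node
gluster volume rebalance vol_dis stop  # run on another node

# Step 5: bring glusterd back on the stopped node.
service glusterd start

# Observed in step 6: rebalance on vol_dis_rep reappears as started
# ("Detected start of rebalance ... from CLI" in the console).
gluster volume rebalance vol_dis_rep status
```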
Comment 6 RamaKasturi 2014-01-24 02:40:57 EST
sos reports are attached here:

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/rhsc/1049890/
Comment 7 Shubhendu Tripathi 2014-01-27 00:00:47 EST
doc text looks fine
Comment 8 Sahina Bose 2015-02-17 05:06:25 EST
This seems like a gluster issue and not an RHSC issue. Closing this as CANTFIX here. Please log a bug against gluster if required.
