Description of problem:
A rebalance operation that had previously failed starts automatically when glusterd is restarted, and the event message "Detected start of rebalance on volume <volName> of Cluster <clusterName> from CLI." is raised.

Version-Release number of selected component (if applicable):
rhsc-2.1.2-0.32.el6rhs.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create 10 distribute volumes.
2. Start rebalance on all the volumes at once.
3. Once rebalance has started on one volume, click the status button; fetching the status details fails.
4. On the same volume, try fetching the volume advanced details; this also fails.
5. Rebalance on all the remaining volumes now fails with the event message "Could not start Gluster Volume <volName> rebalance."
6. Restart glusterd on all the nodes.

Actual results:
Rebalance starts automatically on the volumes where it had failed, with the event message "Detected start of rebalance on volume <volName> of Cluster <clusterName> from CLI".

Expected results:
Rebalance should not start automatically when glusterd is restarted.

Additional info:
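For reference, a rough CLI-level sketch of steps 2-4 follows, assuming the RHSC UI actions map onto the standard gluster commands (rebalance start/status and volume status detail). The volume names are illustrative assumptions, not the ones used in the actual setup; step 6 is noted in a comment since glusterd is restarted on each node outside the script.

#!/usr/bin/env python3
"""Hypothetical CLI-level reproduction sketch (not the RHSC code path)."""
import subprocess

# Assumed names for the 10 distribute volumes.
VOLUMES = ["dis_vol_{}".format(i) for i in range(1, 11)]


def gluster(*args):
    """Run a gluster CLI command and return (exit code, combined output)."""
    proc = subprocess.run(["gluster", *args],
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)
    return proc.returncode, proc.stdout.decode(errors="replace")


# Step 2: start rebalance on all volumes at once.
for vol in VOLUMES:
    rc, _ = gluster("volume", "rebalance", vol, "start")
    print("rebalance start on {}: rc={}".format(vol, rc))

# Steps 3-4: fetch rebalance status and the advanced (detail) status;
# in this bug both queries fail once rebalance is running on one volume.
for vol in VOLUMES:
    print(gluster("volume", "rebalance", vol, "status")[1])
    print(gluster("volume", "status", vol, "detail")[1])

# Step 6 is performed on each storage node outside this script, e.g.
# "service glusterd restart" on RHEL 6. Afterwards,
# "gluster volume rebalance <volName> status" shows whether rebalance
# was (incorrectly) started again on the volumes where it had failed.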
Created attachment 847112 [details] engine logs
Please review the edited DocText and sign off.
I am seeing this issue very often in my local configuration. These are the steps I performed:
1) Created 2 distribute volumes (say vol_dis and vol_dis1) and 2 distribute-replicate volumes (say vol_dis_rep and vol_dis_rep1) using 4 RHS servers.
2) Started rebalance on vol_dis and vol_dis1.
3) Brought down glusterd on one of the nodes and stopped rebalance on vol_dis.
4) The rebalance is stopped and the status icon is updated to the "rebalance stopped" icon.
5) Brought glusterd back up on the node where it was stopped.
6) Rebalance now starts automatically on the volume vol_dis_rep, with the event message "Detected start of rebalance on volume vol_dis_rep of Cluster cluster_regress from CLI".
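A minimal sketch of the check one could run after step 5, once glusterd is back up, to see which volumes report an active rebalance. The volume names are taken from the comment above; this only reports the current rebalance status and does not by itself show what triggered it.

#!/usr/bin/env python3
"""Check rebalance status on each volume after glusterd is restarted."""
import subprocess

VOLUMES = ["vol_dis", "vol_dis1", "vol_dis_rep", "vol_dis_rep1"]

for vol in VOLUMES:
    # "gluster volume rebalance <vol> status" reports whether a rebalance
    # task is currently known/in progress for that volume.
    proc = subprocess.run(["gluster", "volume", "rebalance", vol, "status"],
                          stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    print("== {} ==".format(vol))
    print(proc.stdout.decode(errors="replace"))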
sos reports are attached here: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/rhsc/1049890/
Doc text looks fine.
This seems like a gluster issue and not an RHSC issue. Closing this as CANTFIX here. Please log a bug against gluster if required.