Created attachment 836010 [details]
automation log

Description of problem:
REST API: rebalance causes hosts to become Non Operational after adding 4 bricks sequentially to a 6-brick distribute volume.
* glusterd is down on each of the nodes
* vdsmd is up on each of the nodes
* the glusterd process terminates shortly after each attempt to start glusterd

Version-Release number of selected component (if applicable):
rhsc-cb11

How reproducible:
100% of the time

Steps to Reproduce:
1. Set up a 2-node cluster.
2. Via REST, add a 6-brick distribute volume and start the volume.
3. Via REST, add 4 bricks to the volume, one at a time, sequentially => hosts are up and the volume is fine.
4. Via REST or the GUI, start a rebalance on the volume.

Actual results:
Rebalance fails to start. Hosts are Non Operational. Response given: HTTP 400, "connection failed. please check if gluster daemon is operational."

Expected results:
Rebalance starts successfully.

Additional info:
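A minimal sketch of the reproduction steps above. The engine URL, credentials, cluster/host IDs, and brick paths are placeholders, and the endpoint layout assumes the oVirt/RHSC 3.x REST API for gluster volumes; adjust for your build.

    import xml.etree.ElementTree as ET
    import requests

    ENGINE = "https://rhsc.example.com/api"      # placeholder
    AUTH = ("admin@internal", "password")        # placeholder
    HEADERS = {"Content-Type": "application/xml"}
    CLUSTER = "/clusters/CLUSTER_ID"             # placeholder
    NAME = "StartMigrationDuringRebalanceTest"   # volume name used in the test

    def post(path, body=""):
        """POST an XML body to the engine and fail loudly on HTTP errors."""
        r = requests.post(ENGINE + path, data=body, auth=AUTH,
                          headers=HEADERS, verify=False)
        r.raise_for_status()
        return r

    def brick(i):
        # Alternate bricks across the two hosts in the cluster.
        return ("<brick><server_id>HOST%d_ID</server_id>"
                "<brick_dir>/rhs/brick%d</brick_dir></brick>" % (i % 2 + 1, i))

    # Step 2: create a 6-brick distribute volume and start it.
    resp = post(CLUSTER + "/glustervolumes",
                "<gluster_volume><name>%s</name>"
                "<volume_type>distribute</volume_type>"
                "<bricks>%s</bricks></gluster_volume>"
                % (NAME, "".join(brick(i) for i in range(6))))
    vol_id = ET.fromstring(resp.text).get("id")
    post(CLUSTER + "/glustervolumes/%s/start" % vol_id, "<action/>")

    # Step 3: add 4 bricks, one request at a time.
    for i in range(6, 10):
        post(CLUSTER + "/glustervolumes/%s/bricks" % vol_id,
             "<bricks>%s</bricks>" % brick(i))

    # Step 4: start the rebalance -- this is the call that comes back with
    # HTTP 400 while the hosts go Non Operational.
    post(CLUSTER + "/glustervolumes/%s/rebalance" % vol_id, "<action/>")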
Created attachment 836012 [details] vdsm.log
Created attachment 836013 [details] engine.log
Adding all bricks at once for step 4 in comment#0 also results in the hosts becoming Non Operational.
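For reference, the one-request variant looks roughly like this, under the same assumptions as the sketch in comment#0 (reusing its post(), brick(), CLUSTER, and vol_id):

    # One-shot variant: add all 4 bricks in a single request.
    post(CLUSTER + "/glustervolumes/%s/bricks" % vol_id,
         "<bricks>%s</bricks>" % "".join(brick(i) for i in range(6, 10)))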
Correction to comment#4: s/step 4/step 3/
Step 3 doesn't need to run to cause the hosts to go Non Operational.
The issue seems to be with the volume name "StartMigrationDuringRebalanceTest". Reopening this bug as a glusterfs bug.
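One way to test the volume-name theory outside the console would be to drive the gluster CLI directly and watch whether glusterd survives the rebalance. A rough sketch, with hypothetical host and brick paths:

    import subprocess
    import time

    NAME = "StartMigrationDuringRebalanceTest"  # the suspect volume name
    # Placeholder bricks; "force" allows multiple bricks on the same host.
    BRICKS = ["host1.example.com:/rhs/%s/b%d" % (NAME, i) for i in range(2)]

    def run(*cmd):
        print("+", " ".join(cmd))
        return subprocess.call(list(cmd))

    run("gluster", "volume", "create", NAME, *(BRICKS + ["force"]))
    run("gluster", "volume", "start", NAME)
    run("gluster", "volume", "rebalance", NAME, "start")

    time.sleep(5)
    # If the name is the trigger, glusterd should be gone by now, matching
    # the brief-termination behaviour described in the report.
    alive = run("pgrep", "-x", "glusterd") == 0
    print("glusterd running" if alive else "glusterd is DOWN")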