Description of problem:
========================
After an add-brick, checking the rebalance status immediately after restarting glusterd fails the first time with "volume rebalance: <vol_name>: failed: error". Subsequent status queries return the expected output.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.52rhs

How reproducible:
================
Quite often

Steps to Reproduce:
==================
1. Create a distribute-replicate volume and start it.
2. Mount the volume and create some files.
3. Add 2 bricks and start rebalance.
4. Check rebalance status.
5. Add 2 more bricks and check rebalance status.
6. Restart glusterd and check rebalance status again:

[root@jay brick1]# service glusterd restart
Stopping glusterd:                                         [  OK  ]
Starting glusterd:                                         [  OK  ]
[root@jay brick1]# gluster v rebalance vol1 status
volume rebalance: vol1: failed: error

7. Check rebalance status again:

[root@jay brick1]# gluster v rebalance vol1 status
       Node  Rebalanced-files     size  scanned  failures  skipped     status  run time in secs
  ---------  ----------------  -------  -------  --------  -------  ---------  ----------------
  localhost                19   19.0MB       69         0        0  completed              1.00
10.70.34.88                 0   0Bytes       53         0        7  completed              0.00
10.70.34.87                 0   0Bytes       52         0        0  completed              0.00
10.70.34.89                 0   0Bytes       53         0        0  completed              0.00
volume rebalance: vol1: success:

Actual results:
===============
Checking rebalance status immediately after restarting glusterd gives an error:
volume rebalance: <vol_name>: failed: error

Expected results:
================
Checking rebalance status after restarting glusterd should not give an error.

Additional info:
sosreports : http://rhsqe-repo.lab.eng.blr.redhat.com/bugs_necessary_info/1046879/
Cloning this to 3.1, to be fixed in a future release.
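
A scripted sketch of the reproduction steps above, for a four-node test cluster. The hostnames (server1..server4), brick paths, and mount point are placeholders, not taken from the report; adjust them for the actual setup. This only runs against a live gluster cluster.

```shell
#!/bin/sh
# Reproduction sketch for the glusterd-restart rebalance-status error.
# server1..server4, /bricks/*, and /mnt/vol1 are illustrative placeholders.
VOL=vol1
MNT=/mnt/vol1

# 1. Create a 2x2 distribute-replicate volume and start it
gluster volume create $VOL replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server3:/bricks/b1 server4:/bricks/b1
gluster volume start $VOL

# 2. Mount the volume and create some files
mount -t glusterfs server1:/$VOL $MNT
for i in $(seq 1 100); do
    dd if=/dev/urandom of=$MNT/file$i bs=1M count=1
done

# 3. Add 2 bricks and start rebalance
gluster volume add-brick $VOL server1:/bricks/b2 server2:/bricks/b2
gluster volume rebalance $VOL start

# 4. Check rebalance status
gluster volume rebalance $VOL status

# 5. Add 2 more bricks and check rebalance status
gluster volume add-brick $VOL server3:/bricks/b2 server4:/bricks/b2
gluster volume rebalance $VOL status

# 6. Restart glusterd; the first status query after the restart
#    is where the bug shows up ("failed: error")
service glusterd restart
gluster volume rebalance $VOL status

# 7. Query again -- this time the status table is returned
gluster volume rebalance $VOL status
```
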