Description of problem:
=======================
Rebalance start succeeds without checking that all of the volume's bricks are up, and the subsequent rebalance status shows "failed", which is the expected outcome when a brick is down.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-12.el7rhgs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have a one- or two-node cluster
2. Create a plain distribute volume using 2 bricks and start it
3. FUSE-mount the volume
4. Kill one of the volume's bricks
5. Write enough data on the mount point // e.g. untar the kernel source
6. Trigger the rebalance // gluster volume rebalance <vol-name> start --> this succeeds
7. Check the rebalance status // shows failure, which is expected

Actual results:
===============
Rebalance start succeeds even when volume bricks are down.

Expected results:
=================
Rebalance start should verify that all volume bricks are up and, if any are down, fail with a proper error message.

Additional info:
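The expected behaviour amounts to a pre-check before the rebalance is started. As a rough illustration only (the real fix would live in glusterd's C code, not in a script), the sketch below parses the plain-text output of `gluster volume status <vol-name>` and refuses to proceed unless every brick reports Online == "Y". The function name, the sample output, and the assumed column layout (Port / RDMA Port / Online / Pid, as in the 3.7.x CLI) are illustrative assumptions and may differ between releases.

```python
# Hypothetical pre-check sketch: refuse to start a rebalance unless
# every "Brick ..." line in `gluster volume status` reports Online == "Y".
# The column layout assumed here is that of the glusterfs 3.7.x CLI.

def all_bricks_online(status_output):
    """Return True only if every brick line shows Online == 'Y'."""
    brick_lines = [line for line in status_output.splitlines()
                   if line.startswith("Brick ")]
    if not brick_lines:
        # No bricks listed: treat as unsafe to rebalance.
        return False
    # With the assumed layout, the Online flag is the second-to-last column.
    return all(line.split()[-2] == "Y" for line in brick_lines)

# Sample output mimicking step 4 of the reproducer (one brick killed).
SAMPLE = """\
Status of volume: distvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/bricks/brick1                  49152     0          Y       4321
Brick node1:/bricks/brick2                  N/A       N/A        N       N/A
"""

if __name__ == "__main__":
    if not all_bricks_online(SAMPLE):
        print("Error: one or more bricks are down; refusing to start rebalance")
```

With this kind of check in place, step 6 of the reproducer would fail immediately with a clear error instead of succeeding and then reporting a failed rebalance in step 7.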