Description of problem:
After triggering rebalance on a distributed volume, "gluster volume rebalance <vol> status" reports "not started" even though data is actually being migrated, as the non-zero Rebalanced-files counters below show.

Version-Release number of selected component (if applicable):
glusterfs-server-3.3.0.9rhs-1.el6rhs.x86_64

How reproducible:

Steps to Reproduce:
1. Add a brick to a distribute volume and trigger a rebalance; the status output then hits this issue.

Actual results:

[root@rhsauto018 ~]# gluster v info dist

Volume Name: dist
Type: Distribute
Volume ID: 0130dae0-0573-491b-a4b2-14ac872624e7
Status: Started
Number of Bricks: 13
Transport-type: tcp
Bricks:
Brick1: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick2
Brick2: rhsauto038.lab.eng.blr.redhat.com:/rhs/brick2
Brick3: rhsauto031.lab.eng.blr.redhat.com:/rhs/brick2
Brick4: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick5
Brick5: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist1
Brick6: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist2
Brick7: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist3
Brick8: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist4
Brick9: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dis5
Brick10: rhsauto031.lab.eng.blr.redhat.com:/rhs/brick4/dist5
Brick11: rhsauto031.lab.eng.blr.redhat.com:/rhs/brick4/dist4
Brick12: rhsauto038.lab.eng.blr.redhat.com:/rhs/brick4/dist4
Brick13: rhsauto038.lab.eng.blr.redhat.com:/rhs/brick4/dist5

[root@rhsauto018 ~]# gluster volume rebalance dist start force
Starting rebalance on volume dist has been successful

[root@rhsauto018 rpm]# gluster volume rebalance dist status
                             Node   Rebalanced-files         size      scanned     failures       status
                        ---------        -----------  -----------  -----------  -----------  -----------
                        localhost              11016   1912543601        29571            0  not started
                      10.70.37.13               4060    530579456        24869            0  not started
rhsauto031.lab.eng.blr.redhat.com               3775    549453824        26950            0  not started

[root@rhsauto038 rpm]# gluster volume rebalance dist status
                             Node   Rebalanced-files         size      scanned     failures       status
                        ---------        -----------  -----------  -----------  -----------  -----------
                        localhost               4060    530579456        24869            0  not started
rhsauto031.lab.eng.blr.redhat.com               3775    549453824        26950            0  not started
rhsauto018.lab.eng.blr.redhat.com              11016   1912543601        29571            0  not started

[root@rhsauto031 rpm]# gluster volume rebalance dist status
                             Node   Rebalanced-files         size      scanned     failures       status
                        ---------        -----------  -----------  -----------  -----------  -----------
                        localhost               3775    549453824        26950            0  not started
                      10.70.37.13               4060    530579456        24869            0  not started
rhsauto018.lab.eng.blr.redhat.com              11016   1912543601        29571            0  not started

RHS servers
===========
rhsauto18.lab.eng.blr.redhat.com
rhsauto31.lab.eng.blr.redhat.com
rhsauto38.lab.eng.blr.redhat.com

client
======
rhsauto27.lab.eng.blr.rehdat.com

mount point path
================
/mnt/dist

The sosreport is attached.
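The reproduction steps above can be sketched as the following command sequence. This is a minimal, hedged sketch, not the reporter's exact session: it assumes a running GlusterFS 3.3 trusted pool with the `dist` volume shown above, and the new brick path (`/rhs/brick4/dist6`) is a hypothetical placeholder, since the report does not name the brick that was added.

```shell
# Sketch of the reproduction, assuming a live gluster pool and the
# "dist" volume from the report. The brick path below is hypothetical.
gluster volume add-brick dist \
    rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist6

# Kick off the rebalance.
gluster volume rebalance dist start force

# Bug: the status column reads "not started" on every node, even though
# the Rebalanced-files / size / scanned counters show migration underway.
gluster volume rebalance dist status
```

Running the status command on any node in the pool (as shown in the three captures above) reproduces the same mismatch between the counters and the reported status.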
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version. [1] https://rhn.redhat.com/errata/RHSA-2014-0821.html