Description of problem:

There was a huge number of files on the volume (more than a million, with more still being created) when an add-brick was performed on a pure replicate volume, making it a 2x2 distributed-replicate volume. A rebalance was then issued on the volume. Hours later, when the rebalance status was checked, this is what was printed:

gluster volume geo-replication mirror 10.16.156.18:/export/gsync status
MASTER    SLAVE                          STATUS
--------------------------------------------------------------------------------
mirror    10.16.156.18:/export/gsync     OK

[root@gqas004 scripts]# gluster volume rebalance mirror status
        Node    Rebalanced-files    size    scanned    failures    status
   ---------    -----------    -----------    -----------    -----------    ------------
   localhost    0    0    16740    8412    failed
10.16.156.12    54952747465353020418916371079099919006-7049732762775798405    2065    completed
10.16.156.15    -48813322292539186504365440375079349386-8454170895283874500    2065    completed
10.16.156.18    0    0    2579125    0    completed

That is, negative numbers are printed. However, the statistics reported by rebalance (the number of files rebalanced, the total size migrated, the number of failures while rebalancing, the total number of lookups, and so on) cannot be negative.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a pure replicate volume and populate it with a large number of files (more than a million, with file creation still in progress).
2. Add a brick pair so the volume becomes a 2x2 distributed-replicate volume, then start a rebalance.
3. After the rebalance has run for several hours, check "gluster volume rebalance <volname> status".

Actual results:
The rebalance status output shows negative (and otherwise garbled) values for the per-node counters.

Expected results:
All rebalance counters (files rebalanced, size migrated, files scanned, failures) should be non-negative.

Additional info:
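For illustration only (a hypothetical sketch, not code from the gluster sources and not necessarily the cause addressed by the patch below): one common way such output can arise is when an unsigned 64-bit counter is carried or printed through a signed type, so an overflowed or garbage value shows up as a huge negative number.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Hypothetical counter holding a garbage/overflowed value. */
    uint64_t rebalanced_files = 0xFFFFFFFFFFFFFFF0ULL;

    /* Printed as unsigned 64-bit: a huge but non-negative number. */
    printf("as unsigned: %" PRIu64 "\n", rebalanced_files);

    /* Reinterpreted as signed 64-bit: appears as a negative number,
     * similar to the values in the rebalance status output above. */
    printf("as signed:   %" PRId64 "\n", (int64_t)rebalanced_files);

    return 0;
}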
Fixed by the patch at http://review.gluster.com/3608