Description of problem:
Starting a remove-brick and checking its status shows the count values of a previously run rebalance. These counters should be cleared before remove-brick starts.

Version-Release number of selected component (if applicable):
3.3.0qa41

How reproducible:

Steps to Reproduce:
1. Create a single-brick volume and create some files on the mount point.
2. Add a brick, start rebalance, and check the status.
3. Start remove-brick of the newly added brick and check the status.

Actual results:
The status output still shows the counts from the earlier rebalance after remove-brick has started.

Expected results:
The counters should be cleared.

Additional info:
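For reference, the reproduction can be scripted roughly as follows. The hostnames, volume name, and brick paths are placeholders; the gluster commands themselves are the ones exercised in the comments below.

    # placeholder hosts/paths; single-brick distribute volume
    gluster volume create dist host1:/bricks/d1
    gluster volume start dist
    mount -t glusterfs host1:dist /mnt
    cp -r /etc /mnt                           # create some files

    # step 2: add a brick and rebalance
    gluster volume add-brick dist host2:/bricks/d2
    gluster volume rebalance dist start
    gluster volume rebalance dist status      # repeat until "completed"

    # step 3: remove the new brick and check status immediately
    gluster volume remove-brick dist host2:/bricks/d2 start
    gluster volume remove-brick dist host2:/bricks/d2 status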
The status/counters are reset fine for me:

Add Brick successful

root@shishirng:/gluster/mainline# gluster volume rebalance new start
Starting rebalance on volume new has been successful

root@shishirng:/gluster/mainline# gluster volume rebalance new status
     Node   Rebalanced-files        size     scanned    failures       status
---------        -----------  ----------  ----------  ----------  -----------
localhost                 72       10950         196           1  in progress

root@shishirng:/gluster/mainline# gluster volume rebalance new status
     Node   Rebalanced-files        size     scanned    failures       status
---------        -----------  ----------  ----------  ----------  -----------
localhost                683       68210        2682           5    completed

After completion, remove a brick:

root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 start
Remove Brick start successful

root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
     Node   Rebalanced-files        size     scanned    failures       status
---------        -----------  ----------  ----------  ----------  -----------
localhost                105       41970         273           4  in progress

root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
     Node   Rebalanced-files        size     scanned    failures       status
---------        -----------  ----------  ----------  ----------  -----------
localhost                648       67860        1649           6  in progress

root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
     Node   Rebalanced-files        size     scanned    failures       status
---------        -----------  ----------  ----------  ----------  -----------
localhost               1025       71630        2667           6    completed

Can you please re-check?
This can be reproduced using multiple nodes in the cluster:

[root@gqac022 mnt]# gluster v rebalance dist status
        Node   Rebalanced-files        size     scanned    failures       status
   ---------        -----------  ----------  ----------  ----------  -----------
   localhost                 49   256901120         149           0    completed
10.16.157.66                  0           0         102           0    completed

Volume Name: dist
Type: Distribute
Volume ID: 23ea4531-af5d-4309-b342-f2322a97809e
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.16.157.63:/home/bricks/d1
Brick2: 10.16.157.66:/home/bricks/d2

[root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 start
Remove Brick start successful

[root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 status
        Node   Rebalanced-files        size     scanned    failures       status
   ---------        -----------  ----------  ----------  ----------  -----------
   localhost                 49   256901120         149           0  not started
10.16.157.66                  0           0          52           0  in progress

Note that localhost still reports the counts from the finished rebalance (49 files, 256901120 bytes, 149 scanned) even though its remove-brick status is "not started".
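A quick way to spot the stale counters from a script (illustrative; the volume and brick names are placeholders): any node whose remove-brick status is still "not started" should report zero rebalanced files, so a non-zero count in that row is the leftover rebalance data described in this bug.

    gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 status \
        | awk '/not started/ && $2 != 0 { print "stale counters on node " $1 }'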
Patch http://review.gluster.com/#change,3425 was successfully merged into mainline.
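The patch itself is not reproduced here, but the nature of the fix can be sketched: zero out the per-volume rebalance status counters whenever a rebalance or remove-brick operation starts, so a status query issued right afterwards cannot report totals left over from the previous operation. A minimal C sketch follows; the struct and function names are hypothetical and do not correspond to the actual glusterd symbols changed by the patch.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical stand-in for the per-volume rebalance status
     * record; the real glusterd structure and field names differ. */
    typedef struct {
            uint64_t rebalanced_files;   /* "Rebalanced-files" column */
            uint64_t rebalanced_size;    /* "size" column             */
            uint64_t scanned_files;      /* "scanned" column          */
            uint64_t failures;           /* "failures" column         */
    } rebal_status_t;

    /* Called at the start of both "rebalance ... start" and
     * "remove-brick ... start", so stale totals from a prior run
     * can never show up in the new operation's status output. */
    static void
    rebal_status_reset (rebal_status_t *status)
    {
            memset (status, 0, sizeof (*status));
    }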
The bug fix is only in upstream, not in release-3.3. Hence, moving this out of ON_QA and setting it to MODIFIED (as per standard practice at Red Hat).
Verified on 3.4.0qa5-1.el6rhs.x86_64. All of the fields are now reset upon starting rebalance or remove-brick.