Bug 822778 - Remove-brick start does not clear the count information from status
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: core
Version: pre-release
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Assigned To: shishir gowda
QA Contact: Anush Shetty
Depends On:
Blocks:
 
Reported: 2012-05-18 02:37 EDT by shylesh
Modified: 2013-12-08 20:32 EST
3 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:43:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: glusterfs 3.4.0qa5
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description shylesh 2012-05-18 02:37:45 EDT
Description of problem:
Starting a remove-brick and checking its status shows the count values from a previously run rebalance; these counters should be cleared when remove-brick starts.

Version-Release number of selected component (if applicable):
3.3.0qa41

How reproducible:


Steps to Reproduce:
1. Create a single-brick volume and create some files on the mount point.
2. Add a brick, start rebalance, and check the status.
3. Now start remove-brick of the newly added brick and check the status.
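
For reference, a minimal command sequence matching the steps above (host names, brick paths, volume name, and mount point are illustrative, not taken from this report):

# 1. Create a single-brick volume and populate it from a client mount
gluster volume create dist server1:/bricks/d1
gluster volume start dist
mount -t glusterfs server1:/dist /mnt/dist
for i in $(seq 1 100); do dd if=/dev/urandom of=/mnt/dist/file$i bs=1M count=1; done

# 2. Add a brick, start rebalance, and watch the counters until it completes
gluster volume add-brick dist server2:/bricks/d2
gluster volume rebalance dist start
gluster volume rebalance dist status

# 3. Start remove-brick of the newly added brick and check its status;
#    the counters here should start from zero rather than carrying over
#    the totals from the rebalance in step 2
gluster volume remove-brick dist server2:/bricks/d2 start
gluster volume remove-brick dist server2:/bricks/d2 status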
  
Actual results:

The counters from the previous rebalance run still appear in the status output after remove-brick is started.

Expected results:
Counters should be cleared.

Additional info:
Comment 1 shishir gowda 2012-05-18 06:27:12 EDT
The status/counters are reset fine for me:

Add Brick successful

root@shishirng:/gluster/mainline# gluster volume rebalance new start
Starting rebalance on volume new has been successful
root@shishirng:/gluster/mainline# gluster volume rebalance new status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost               72        10950          196            1    in progress

root@shishirng:/gluster/mainline# gluster volume rebalance new status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost              683        68210         2682            5      completed


After the rebalance completes, remove a brick:

root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 start
Remove Brick start successful
root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost              105        41970          273            4    in progress
root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost              648        67860         1649            6    in progress
root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost             1025        71630         2667            6      completed

Can you please re-check?
Comment 2 shylesh 2012-05-20 07:52:17 EDT
This can be reproduced with multiple nodes in the cluster:


[root@gqac022 mnt]# gluster v rebalance dist status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost               49    256901120          149            0      completed
                            10.16.157.66                0            0          102            0      completed

Volume Name: dist
Type: Distribute
Volume ID: 23ea4531-af5d-4309-b342-f2322a97809e
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.16.157.63:/home/bricks/d1
Brick2: 10.16.157.66:/home/bricks/d2

[root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 start
Remove Brick start successful
[root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost               49    256901120          149            0    not started
                            10.16.157.66                0            0           52            0    in progress
Comment 3 shishir gowda 2012-05-25 00:21:39 EDT
Patch http://review.gluster.com/#change,3425 was successfully merged into mainline.
Comment 4 Amar Tumballi 2012-06-01 02:53:49 EDT
The bug fix is only in upstream, not in release-3.3. Hence moving it out of ON_QA and setting it to MODIFIED (as a standard practice @ Red Hat).
Comment 5 shylesh 2012-12-28 05:53:12 EST
Verified on 3.4.0qa5-1.el6rhs.x86_64.
Now all the fields are reset upon starting rebalance or remove-brick.
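
An illustrative spot-check on a fixed build, reusing the volume and brick names from the reproduction in comment 2 (command sequence assumed, not taken from the actual verification run):

gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 start
gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 status
# Immediately after the start, the Rebalanced-files, size, scanned and
# failures columns should read 0 for every node instead of showing the
# totals left over from the previous rebalance run.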
