Bug 822778 - Remove-brick start does not clear the count information from status
Summary: Remove-brick start does not clear the count information from status
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: pre-release
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: shishir gowda
QA Contact: Anush Shetty
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-05-18 06:37 UTC by shylesh
Modified: 2013-12-09 01:32 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:43:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: glusterfs 3.4.0qa5
Embargoed:


Attachments: none

Description shylesh 2012-05-18 06:37:45 UTC
Description of problem:
Starting a remove-brick and checking its status shows the count values from a previously run rebalance; these counters should be cleared before remove-brick starts.

Version-Release number of selected component (if applicable):
3.3.0qa41

How reproducible:


Steps to Reproduce:
1. Create a single-brick volume and create some files on the mount point.
2. Add a brick, start rebalance, and check the status.
3. Now start remove-brick of the newly added brick and check the status (a shell sketch of these steps follows below).
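
A minimal shell sketch of the reproduction steps, assuming a hypothetical volume "testvol" with bricks under /bricks on a server "server1" (names and paths are placeholders, not taken from this report):

# 1. single-brick volume with some files on the mount point
gluster volume create testvol server1:/bricks/b1
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol
for i in $(seq 1 100); do dd if=/dev/zero of=/mnt/testvol/file$i bs=1M count=1; done

# 2. add a second brick, start rebalance, and check the status
gluster volume add-brick testvol server1:/bricks/b2
gluster volume rebalance testvol start
gluster volume rebalance testvol status

# 3. start remove-brick of the newly added brick and check the status
gluster volume remove-brick testvol server1:/bricks/b2 start
gluster volume remove-brick testvol server1:/bricks/b2 status   # counters should not carry over from the earlier rebalance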
  
Actual results:

The rebalance status counters still persist after starting remove-brick.

Expected results:
Counters should be cleared.

Additional info:

Comment 1 shishir gowda 2012-05-18 10:27:12 UTC
The status/counters are reset fine for me:

Add Brick successful

root@shishirng:/gluster/mainline# gluster volume rebalance new start
Starting rebalance on volume new has been successful
root@shishirng:/gluster/mainline# gluster volume rebalance new status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost               72        10950          196            1    in progress

root@shishirng:/gluster/mainline# gluster volume rebalance new status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost              683        68210         2682            5      completed


After the rebalance completes, remove a brick:

root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 start
Remove Brick start successful
root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost              105        41970          273            4    in progress
root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost              648        67860         1649            6    in progress
root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost             1025        71630         2667            6      completed

Can you please re-check?

Comment 2 shylesh 2012-05-20 11:52:17 UTC
This can be reproduced with multiple nodes in the cluster:


[root@gqac022 mnt]# gluster v rebalance dist status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost               49    256901120          149            0      completed
                            10.16.157.66                0            0          102            0      completed

Volume Name: dist
Type: Distribute
Volume ID: 23ea4531-af5d-4309-b342-f2322a97809e
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.16.157.63:/home/bricks/d1
Brick2: 10.16.157.66:/home/bricks/d2

[root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 start
Remove Brick start successful
[root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost               49    256901120          149            0    not started
                            10.16.157.66                0            0           52            0    in progress

Comment 3 shishir gowda 2012-05-25 04:21:39 UTC
Patch http://review.gluster.com/#change,3425 was successfully merged into mainline.

Comment 4 Amar Tumballi 2012-06-01 06:53:49 UTC
The bug fix is only in upstream, not in release-3.3. Hence moving it out of ON_QA and setting it to MODIFIED (as a standard practice at Red Hat).

Comment 5 shylesh 2012-12-28 10:53:12 UTC
Verified on 3.4.0qa5-1.el6rhs.x86_64.
Now all the fields are reset upon starting rebalance or remove-brick.
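
For reference, a sketch of the verification flow, reusing the volume and brick names from comment 2; the expectation is that every node's counters (Rebalanced-files, size, scanned, failures) start from zero once remove-brick begins:

gluster volume rebalance dist start
gluster volume rebalance dist status                                  # wait until all nodes show "completed"
gluster volume remove-brick dist 10.16.157.66:/home/bricks/d2 start
gluster volume remove-brick dist 10.16.157.66:/home/bricks/d2 status  # counters should begin from 0, not the old rebalance values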

