Bug 822778
Summary: | Remove-brick start does not clear the count information from status | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | shylesh <shmohan> |
Component: | core | Assignee: | shishir gowda <sgowda> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Anush Shetty <ashetty> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | ||
Version: | pre-release | CC: | gluster-bugs, nsathyan, ujjwala |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | glusterfs-3.4.0 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2013-07-24 17:43:04 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | glusterfs 3.4.0qa5 | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description shylesh 2012-05-18 06:37:45 UTC
The status/counters are reset fine for me:

    Add Brick successful
    root@shishirng:/gluster/mainline# gluster volume rebalance new start
    Starting rebalance on volume new has been successful
    root@shishirng:/gluster/mainline# gluster volume rebalance new status
    Node         Rebalanced-files    size     scanned    failures    status
    ---------    ----------------    -----    -------    --------    ------------
    localhost    72                  10950    196        1           in progress
    root@shishirng:/gluster/mainline# gluster volume rebalance new status
    Node         Rebalanced-files    size     scanned    failures    status
    ---------    ----------------    -----    -------    --------    ------------
    localhost    683                 68210    2682       5           completed

After completion, remove a brick:

    root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 start
    Remove Brick start successful
    root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
    Node         Rebalanced-files    size     scanned    failures    status
    ---------    ----------------    -----    -------    --------    ------------
    localhost    105                 41970    273        4           in progress
    root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
    Node         Rebalanced-files    size     scanned    failures    status
    ---------    ----------------    -----    -------    --------    ------------
    localhost    648                 67860    1649       6           in progress
    root@shishirng:/gluster/mainline# gluster volume remove-brick new sng:/export/dir2 status
    Node         Rebalanced-files    size     scanned    failures    status
    ---------    ----------------    -----    -------    --------    ------------
    localhost    1025                71630    2667       6           completed

Can you please re-check?

This can be reproduced using multiple nodes in the cluster:

    [root@gqac022 mnt]# gluster v rebalance dist status
    Node            Rebalanced-files    size         scanned    failures    status
    ---------       ----------------    ---------    -------    --------    ------------
    localhost       49                  256901120    149        0           completed
    10.16.157.66    0                   0            102        0           completed

    Volume Name: dist
    Type: Distribute
    Volume ID: 23ea4531-af5d-4309-b342-f2322a97809e
    Status: Started
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: 10.16.157.63:/home/bricks/d1
    Brick2: 10.16.157.66:/home/bricks/d2

    [root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 start
    Remove Brick start successful
    [root@gqac022 mnt]# gluster v remove-brick dist 10.16.157.66:/home/bricks/d2 status
    Node            Rebalanced-files    size         scanned    failures    status
    ---------       ----------------    ---------    -------    --------    ------------
    localhost       49                  256901120    149        0           not started
    10.16.157.66    0                   0            52         0           in progress

Note that the localhost row, whose remove-brick has not started, still carries the counters from the earlier rebalance (49 rebalanced files, 149 scanned): the counts are not cleared when remove-brick start is issued.

Patch http://review.gluster.com/#change,3425 was successfully merged into mainline.

The bug fix is only in upstream, not in release-3.3. Hence moving it out of ON_QA and setting it to MODIFIED (as is standard practice at Red Hat).

Verified on glusterfs-3.4.0qa5-1.el6rhs.x86_64. All the fields are now reset upon starting rebalance or remove-brick.
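For reference, a minimal sketch of the reproduction/verification sequence above, assuming a started distribute volume named dist that already holds data and a removable brick 10.16.157.66:/home/bricks/d2 (names taken from the report output); adapt them to the cluster at hand:

```sh
# Reproduction sketch, assuming volume "dist" and brick
# 10.16.157.66:/home/bricks/d2 (from the report) exist and hold data.

# 1. Run a rebalance to completion so the per-node counters become non-zero.
gluster volume rebalance dist start
gluster volume rebalance dist status    # repeat until every node reports "completed"

# 2. Start a remove-brick and check its status right away.
gluster volume remove-brick dist 10.16.157.66:/home/bricks/d2 start
gluster volume remove-brick dist 10.16.157.66:/home/bricks/d2 status

# Before the fix (http://review.gluster.com/#change,3425, shipped in glusterfs-3.4.0),
# nodes that had not yet started the remove-brick still showed the leftover
# rebalance counters; after the fix all fields are reset on start.
```

On a fixed build, the same sequence should show zeroed counters for each node immediately after the remove-brick start, rather than the values carried over from the previous rebalance.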