Bug 979376
Summary: | Rebalance : Gluster volume remove brick status displays 2 entries for one host | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | senaik
Component: | glusterfs | Assignee: | Kaushal <kaushal>
Status: | CLOSED ERRATA | QA Contact: | senaik
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 2.1 | CC: | dpati, kaushal, psriniva, rhs-bugs, vagarwal, vbellur, vraman
Target Milestone: | --- | Keywords: | ZStream
Target Release: | RHGS 2.1.2 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.4.0.47.1u2rhs | Doc Type: | Bug Fix
Doc Text: | Previously, when one of the hosts in a cluster was restarted, the remove-brick status command displayed two entries for the same host. With this fix, the command works as expected. | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2014-02-25 07:32:18 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1015045 | |
Description senaik 2013-06-28 11:14:28 UTC
Kaushal, can you see if this is still an issue? I don't remember seeing it in any of the recent builds. If it is not seen, can you move the bug to ON_QA?

The changes made for bug 1019846 fix this issue. Moving to ON_QA.

Can you please verify the doc text for technical accuracy?

The doc text looks fine.

Version: glusterfs 3.4.0.55rhs

Previously, when one of the nodes was rebooted while a remove-brick operation was in progress, checking the remove-brick status showed localhost twice. Now only the node from which the brick was removed is shown.

In my opinion, when a remove-brick operation is started, the status should show the nodes FROM which the data is moving and TO which nodes it is going, i.e. both the SOURCE and DESTINATION nodes should be shown.

Steps:
1. Created a distribute volume with 3 bricks and started it.
2. Mounted the volume and created some files.
3. Removed a brick:

gluster volume remove-brick dist1 10.70.37.111:/rhs/brick1/e1 start
volume remove-brick start: success
ID: 1e6763f0-4f68-41b1-8bda-786befc80a8a

[root@boo ~]# gluster volume remove-brick dist1 10.70.37.111:/rhs/brick1/e1 status

Node | Rebalanced-files | size | scanned | failures | skipped | status | run time in secs
---|---|---|---|---|---|---|---
10.70.37.111 | 16 | 160.0MB | 17 | 0 | 0 | in progress | 4.00

Could you please clarify this?

When bricks are removed, the data on them is rebalanced onto the remaining bricks, so there is no exact destination. This is unlike replace-brick, where we have an explicit source and destination. For both processes the destinations are passive and the source is active: the destinations do not need to do anything other than have a running brick. All the work is done by the source, so it is the one collecting the statistics for the procedure. From the destinations' point of view, they are just serving the requests of another client. Since only the source contains information specific to the process (rebalance/remove-brick/replace-brick), the status command only gives information from the source.

As per comment 3 in https://bugzilla.redhat.com/show_bug.cgi?id=1030932, remove-brick changes the layout of existing directories and migrates data from the non-decommissioned bricks as well, so in this case shouldn't we be showing all the nodes present in the status?

A rebalance process should only be concerned with migrating data from those bricks of the volume which are present on the peer on which it is running. In the case of remove-brick, the rebalance processes are launched only on the peers that contain the bricks being removed, so they should only be migrating data from those bricks. If those peers also contain other bricks belonging to the volume, it appears that the rebalance processes will rebalance the data on those bricks as well (which is incorrect in my opinion, and is what bug 1030932 implies). But even then, the processes are launched only on the peers containing the bricks being removed, so the output of the status command is still correct.

Version: glusterfs 3.4.0.55rhs

Previously, when one of the nodes was rebooted while a remove-brick operation was in progress, checking the remove-brick status showed localhost twice. Now only the node from which the brick was removed is shown.

Marking the bug 'Verified'.
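For reference, below is a minimal shell sketch of the remove-brick workflow exercised in the comments above, pieced together from the commands quoted there. The second and third hosts, their brick paths, the mount point, and the test files are illustrative assumptions, not values taken from this report.

```sh
# Sketch of the remove-brick scenario discussed above.
# 10.70.37.111 and /rhs/brick1/e1 come from the report; the other hosts,
# brick paths, and the mount point are assumed for illustration.

# Create and start a plain distribute volume with three bricks.
gluster volume create dist1 \
    10.70.37.111:/rhs/brick1/e1 \
    10.70.37.112:/rhs/brick1/e2 \
    10.70.37.113:/rhs/brick1/e3
gluster volume start dist1

# Mount the volume and create some files so there is data to migrate.
mount -t glusterfs 10.70.37.111:/dist1 /mnt/dist1
for i in $(seq 1 20); do
    dd if=/dev/urandom of=/mnt/dist1/file$i bs=1M count=10
done

# Start removing one brick; a rebalance process is launched only on the
# peer hosting the brick being removed (10.70.37.111 here).
gluster volume remove-brick dist1 10.70.37.111:/rhs/brick1/e1 start

# The status should list that source peer exactly once, even if another
# node in the cluster is rebooted while the operation is in progress.
gluster volume remove-brick dist1 10.70.37.111:/rhs/brick1/e1 status

# Once the status reports "completed", finalize the removal.
gluster volume remove-brick dist1 10.70.37.111:/rhs/brick1/e1 commit
```

As explained in the comments above, only the peers hosting the bricks being removed run the migration, which is why the status output contains a single row per source node.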
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html