Bug 1019846
| Field | Value |
| --- | --- |
| Summary | Rebalance: Status command shows the node which does not participate in rebalance |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | RamaKasturi <knarra> |
| Component | glusterfs |
| Assignee | Kaushal <kaushal> |
| Status | CLOSED ERRATA |
| QA Contact | senaik |
| Severity | high |
| Priority | high |
| Version | 2.1 |
| CC | dpati, dtsang, grajaiya, kaushal, knarra, mmahoney, pprakash, psriniva, ssampat, vagarwal, vbellur, vmallika |
| Keywords | ZStream |
| Target Release | RHGS 2.1.2 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | glusterfs-3.4.0.44.1u2rhs |
| Doc Type | Bug Fix |
| Doc Text | Previously, the rebalance status command would also display peers that did not have any associated bricks. With this fix, the rebalance status command works as expected. |
| Clones | 1031887 (view as bug list) |
| Last Closed | 2014-02-25 07:52:52 UTC |
| Type | Bug |
| Bug Depends On | 1031887 |
Description (RamaKasturi, 2013-10-16 13:38:34 UTC)
Remove-brick status has exactly the same issue as rebalance; they are probably all related.

*** Bug 1034643 has been marked as a duplicate of this bug. ***

Version: 3.4.0.44.1u2rhs

The node which is not participating in the rebalance does not come up in the status output.

Steps:

```
# gluster peer status
Number of Peers: 3

Hostname: 10.70.34.88
Uuid: b88e9ce9-504f-45ba-8a71-206c3c6df6f9
State: Peer in Cluster (Connected)

Hostname: 10.70.34.85
Uuid: b2461aa6-24bb-4d70-b43b-d6a73ab84698
State: Peer in Cluster (Connected)

Hostname: 10.70.34.87
Uuid: 2972dc01-4c8b-4d73-916b-8797a65a4e51
State: Peer in Cluster (Connected)
```

1) Create a distribute/distributed-replicate volume with 3 bricks, such that there are no bricks on one of the peers, and start it:

```
# gluster v create vol3 10.70.34.86:/rhs/brick1/d1 10.70.34.88:/rhs/brick1/d2 10.70.34.85:/rhs/brick1/d3
volume create: vol3: success: please start the volume to access data
[root@boost c3]# gluster v start vol3
volume start: vol3: success
```

2) FUSE/NFS mount the volume and create some files.

3) Add bricks to the volume and start rebalance:

```
# gluster v add-brick vol3 10.70.34.86:/rhs/brick1/d4 10.70.34.88:/rhs/brick1/d5
volume add-brick: success
```

4) Check rebalance status:

```
[root@boost d3]# gluster v rebalance vol3 status
Node         Rebalanced-files  size    scanned  failures  skipped  status     run time in secs
localhost    0                 0Bytes  106      0         13       completed  0.00
10.70.34.88  0                 0Bytes  106      0         15       completed  0.00
10.70.34.86  10                10.0MB  110      0         0        completed  1.00
volume rebalance: vol3: success
```

The node which is not participating in rebalance is not shown in the rebalance status.

5) Restarted glusterd and checked rebalance status again; the output is the same.

Moving the bug to 'Verified' state.

Please verify the doc text for technical accuracy.

The doc text looks okay.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
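The verification above hinges on which nodes appear in the first column of the `gluster v rebalance <vol> status` table. As a minimal sketch (not part of the bug report, and both helper names are hypothetical), the following parses a status table in the plain-text layout shown above and reports which peers are absent; the node names and sample output are taken from the report.

```python
# Sketch: determine which nodes a rebalance status table lists, assuming the
# plain-text column layout shown in the report (first column is the node name).

SAMPLE_STATUS = """\
Node         Rebalanced-files  size    scanned  failures  skipped  status     run time in secs
localhost    0                 0Bytes  106      0         13       completed  0.00
10.70.34.88  0                 0Bytes  106      0         15       completed  0.00
10.70.34.86  10                10.0MB  110      0         0        completed  1.00
volume rebalance: vol3: success"""

def nodes_in_status(status_output):
    """Return the node names listed in the rebalance status table."""
    nodes = []
    for line in status_output.splitlines()[1:]:       # skip the header row
        if line.startswith("volume rebalance:"):      # trailing summary line
            break
        fields = line.split()
        if fields:
            nodes.append(fields[0])
    return nodes

def missing_peers(status_output, peers):
    """Peers (from `gluster peer status`) that the status table omits."""
    shown = set(nodes_in_status(status_output))
    return [p for p in peers if p not in shown]

# Peers from the report; 10.70.34.85 and 10.70.34.87 hold no rebalanced bricks.
peers = ["10.70.34.88", "10.70.34.85", "10.70.34.87"]
print(nodes_in_status(SAMPLE_STATUS))        # ['localhost', '10.70.34.88', '10.70.34.86']
print(missing_peers(SAMPLE_STATUS, peers))   # ['10.70.34.85', '10.70.34.87']
```

With the fix, peers that hold no bricks of the volume (here 10.70.34.85 and 10.70.34.87) are expected to be absent from the table, which is exactly what `missing_peers` reports.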