Bug 982104
| Summary: | Rebalance Status message not showing correct status | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | senaik |
| Component: | glusterfs | Assignee: | Kaushal <kaushal> |
| Status: | CLOSED ERRATA | QA Contact: | senaik |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.1 | CC: | barumuga, dpati, grajaiya, kaushal, psriniva, rhs-bugs, sabose, vbellur, vraman |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.44.1u2rhs | Doc Type: | Bug Fix |
| Doc Text: | Previously, the add-brick command would reset the rebalance status. As a result, the 'rebalance status' command displayed an incorrect status. With this fix, the 'rebalance status' command works as expected. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| | 1006247 (view as bug list) | Environment: | |
| Last Closed: | 2014-02-25 07:32:46 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1006247 | | |
| Bug Blocks: | | | |
Description
senaik
2013-07-08 07:19:29 UTC
Another related issue:
When a gluster rebalance has completed and a brick is added to the volume after completion, 'gluster volume status all' gives incorrect output:
[root@localhost ~]# gluster volume status all
Status of volume: dv1
Gluster process                                Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.152:/brcks/dvb1                 49156   Y       11341
Brick 10.70.42.152:/brcks/dvb2                 49157   Y       11351
Brick 10.70.42.152:/brcks/dvb3                 49158   Y       27642
NFS Server on localhost                        2049    Y       27652

Task                 ID                                      Status
----                 --                                      ------
Rebalance            4a96c34d-fe5e-48b3-b349-80621abb85f3    0
Here, the task ID is from the previous rebalance operation.
The status is incorrectly shown as "Not Started".
This affects the task monitoring in RHSC. Can this be fixed, please?
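
For reference, a minimal reproduction sketch of the sequence described above, using the gluster CLI. The volume name dv1 and the existing bricks are taken from the output above; the added brick path (dvb4) is illustrative:

# Hedged reproduction sketch; the added brick name (dvb4) is illustrative.
gluster volume rebalance dv1 start
gluster volume rebalance dv1 status     # wait until all nodes report "completed"
gluster volume add-brick dv1 10.70.42.152:/brcks/dvb4
gluster volume status all               # bug: the finished rebalance task reappears as "Not Started"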
RHSC has a dependency on this bug; hence moving the priority to High.

Version : glusterfs 3.4.0.52rhs
=======
Repeated the steps as mentioned in "Steps to reproduce". After adding a brick once the rebalance is completed, checking the rebalance status shows the output below:

[root@jay tmp]# gluster v add-brick vol3 10.70.34.89:/rhs/brick1/c7 10.70.34.87:/rhs/brick1/c8
volume add-brick: success

[root@jay tmp]# gluster v rebalance vol3 status
Node         Rebalanced-files  size    scanned  failures  skipped  status     run time in secs
----         ----------------  ----    -------  --------  -------  ------     ----------------
localhost    0                 0Bytes  53       0         2        completed  0.00
10.70.34.88  0                 0Bytes  54       0         2        completed  0.00
10.70.34.87  15                15.0MB  60       0         0        completed  1.00
10.70.34.89  6                 6.0MB   61       0         0        completed  0.00
volume rebalance: vol3: success:

Checking 'gluster volume status all' shows the rebalance task as "completed", with the same task ID as when the rebalance operation was started:

[root@jay tmp]# gluster v rebalance vol4 start
volume rebalance: vol4: success: Starting rebalance on volume vol4 has been successful.
ID: ffc8ce22-6dc4-43a3-b9df-b30d54e18646

Status of volume: vol4
Gluster process                    Port   Online  Pid
------------------------------------------------------------------------------
Brick 10.70.34.86:/rhs/brick1/d1   49159  Y       14375
Brick 10.70.34.87:/rhs/brick1/d2   49166  Y       7937
Brick 10.70.34.88:/rhs/brick1/d3   49155  Y       8617
Brick 10.70.34.89:/rhs/brick1/d4   49160  Y       22961
Brick 10.70.34.87:/rhs/brick1/d5   49167  Y       7973
Brick 10.70.34.89:/rhs/brick1/d6   49161  Y       23005
Brick 10.70.34.87:/rhs/brick1/d7   49168  Y       8021
NFS Server on localhost            2049   Y       14535
NFS Server on 10.70.34.88          2049   Y       8693
NFS Server on 10.70.34.87          2049   Y       8039
NFS Server on 10.70.34.89          2049   Y       23017

Task Status of Volume vol4
------------------------------------------------------------------------------
Task   : Rebalance
ID     : ffc8ce22-6dc4-43a3-b9df-b30d54e18646
Status : completed

Marking the bug as 'Verified'.

Can you please verify that the doc text is technically correct?

The doc text looks fine.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
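
As a closing note, for anyone re-verifying on a fixed build (glusterfs-3.4.0.44.1u2rhs or later), a minimal sketch of the check performed in the verification comment above. The volume and brick names come from that comment; the grep patterns are illustrative and assume the CLI output format shown there:

# Hedged verification sketch: the rebalance task should keep its original
# ID and report "completed" even after a post-rebalance add-brick.
gluster volume add-brick vol3 10.70.34.89:/rhs/brick1/c7 10.70.34.87:/rhs/brick1/c8
gluster volume rebalance vol3 status | grep -c completed      # expect one line per node
gluster volume status vol3 | grep -A 2 'Task.*: Rebalance'    # expect the original ID and "Status : completed"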