Bug 1029239
Summary: | RFE: Rebalance information should include volume name and brick-specific information | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | purpleidea
Component: | core | Assignee: | Susant Kumar Palai <spalai>
Status: | CLOSED WONTFIX | QA Contact: |
Severity: | low | Docs Contact: |
Priority: | medium | |
Version: | mainline | CC: | atumball, bugs, purpleidea, smohan
Target Milestone: | --- | Keywords: | FutureFeature, Triaged
Target Release: | --- | |
Hardware: | All | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Enhancement
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2018-10-24 10:17:51 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Feature requests make the most sense against the 'mainline' release: there is no ETA for an implementation, and requests might get forgotten when filed against a particular version. Not planning to work on this in glusterd (v1) anymore! The update on gd2 is that status would be exposed at a URL, and the JSON returned there would include these keys.
Description of problem:

Sorry if this is assigned to the wrong component; I wasn't 100% sure which one is right. I'll copy this directly from the mailing list...

In the command:

```
gluster volume rebalance myvolume status ; gluster volume rebalance myvolume status --xml && echo t
```

nowhere does it mention the volume, or the specific bricks which are being [re-]balanced. In particular, a volume name would be especially useful in the --xml output. This would be useful if multiple rebalances are going on. I realize this is because the rebalance command only allows you to specify one volume at a time, but to be consistent with other commands, a `volume rebalance status` command should let you get info on many volumes. Also, per-brick information is still missing.

```
Node              Rebalanced-files  size    scanned  failures  skipped  status       run time in secs
---------         ----------------  ------  -------  --------  -------  -----------  ----------------
localhost         0                 0Bytes  2        0         0        in progress  1.00
vmx2.example.com  0                 0Bytes  7        0         0        in progress  1.00
volume rebalance: examplevol: success:
```

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>115</opErrno>
  <opErrstr/>
  <volRebalance>
    <task-id>c5e9970b-f96a-4a28-af14-5477cf90d638</task-id>
    <op>3</op>
    <nodeCount>2</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>2</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </node>
    <node>
      <nodeName>vmx2.example.com</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>7</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>9</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>
```

Kaushal's reply was:

> Having the volume name in xml output is a valid enhancement. Go ahead
> and open a RFE bug for it.
> The rebalance process on each node crawls the whole volume to find
> files which need to be migrated and which are present on bricks of the
> volume belonging to that node. So the rebalance status of a node can
> be considered the status of the brick. But if a node contains more
> than one brick of the volume being rebalanced, we don't have a way to
> differentiate, and I'm not sure if we could do that.

So here's the bug. Cheers!

Version-Release number of selected component (if applicable):

```
gluster --version
glusterfs 3.4.1 built on Sep 27 2013 13:13:58
```

How reproducible: 100%

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info: Copied from mailing list.
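To make the gap concrete, here is a minimal Python sketch (stdlib only) that parses the `--xml` output quoted above the way a monitoring script would. The `volname` tag it looks for is hypothetical: no such element exists anywhere in the tree, which is exactly the problem this RFE describes, since a script polling several concurrent rebalances cannot tell which volume a given status document belongs to.

```python
import xml.etree.ElementTree as ET

# --xml output copied from the bug report (the <aggregate> element is
# omitted here for brevity).
CLI_OUTPUT = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>115</opErrno>
  <opErrstr/>
  <volRebalance>
    <task-id>c5e9970b-f96a-4a28-af14-5477cf90d638</task-id>
    <op>3</op>
    <nodeCount>2</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>2</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </node>
    <node>
      <nodeName>vmx2.example.com</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>7</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </node>
  </volRebalance>
</cliOutput>"""

root = ET.fromstring(CLI_OUTPUT)

# Hypothetical tag -- the CLI does not emit it, so this is always None.
# The caller must remember which volume it ran the command against.
volume = root.findtext(".//volname")
print("volume name in output:", volume)

# Per-node status is the finest granularity available; if a node hosts
# more than one brick of the volume, there is no per-brick breakdown.
for node in root.iter("node"):
    print(node.findtext("nodeName"), node.findtext("statusStr"))
```

Note that the per-node `<node>` elements are the only breakdown the document offers, which matches Kaushal's point above: node status stands in for brick status and cannot distinguish multiple bricks on one node.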