Bug 1029239 - RFE: Rebalance information should include volume name and brick specific information
Summary: RFE: Rebalance information should include volume name and brick specific information
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: All
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-12 00:02 UTC by purpleidea
Modified: 2018-10-24 10:17 UTC (History)
4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-24 10:17:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description purpleidea 2013-11-12 00:02:22 UTC
Description of problem:

Sorry if this is assigned to the wrong component; I wasn't 100% sure which one is right.

I'll copy this directly from the mailing list...

In the commands:
gluster volume rebalance myvolume status ;
gluster volume rebalance myvolume status --xml && echo t
nowhere does the output mention the volume, or the specific bricks which are
being [re-]balanced. In particular, a volume name would be especially
useful in the --xml output.

This would be useful when multiple rebalances are going on. I realize
this is because the rebalance command only lets you specify one
volume at a time, but to be consistent with other commands, a volume
rebalance status command should let you get info on many volumes.

Also, per-brick information is still missing.



                Node  Rebalanced-files     size  scanned  failures  skipped       status  run time in secs
           ---------  ----------------  -------  -------  --------  -------  -----------  ----------------
           localhost                 0   0Bytes        2         0        0  in progress              1.00
    vmx2.example.com                 0   0Bytes        7         0        0  in progress              1.00
volume rebalance: examplevol: success:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>115</opErrno>
  <opErrstr/>
  <volRebalance>
    <task-id>c5e9970b-f96a-4a28-af14-5477cf90d638</task-id>
    <op>3</op>
    <nodeCount>2</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>2</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </node>
    <node>
      <nodeName>vmx2.example.com</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>7</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>9</lookups>
      <failures>0</failures>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>
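To make the gap concrete, here is a minimal sketch (Python 3 stdlib only, not part of any gluster tooling) of a script consuming the --xml output above. Because <volRebalance> carries no volume element, the caller has to pass the volume name in out-of-band; the function name and dict keys are my own invention for illustration.

```python
# Sketch: parse `gluster volume rebalance <vol> status --xml` output.
# The volume name must be supplied by the caller -- the XML itself
# never states it, which is exactly the gap this RFE asks to close.
import xml.etree.ElementTree as ET

# Trimmed version of the sample output shown above.
SAMPLE = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <volRebalance>
    <task-id>c5e9970b-f96a-4a28-af14-5477cf90d638</task-id>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <statusStr>in progress</statusStr>
    </node>
  </volRebalance>
</cliOutput>"""

def rebalance_summary(xml_text, volume):
    """Pair each node's status with a volume name known only to the caller."""
    root = ET.fromstring(xml_text)
    out = []
    for node in root.iter("node"):
        out.append({
            "volume": volume,  # external knowledge, absent from the XML
            "node": node.findtext("nodeName"),
            "files": int(node.findtext("files")),
            "status": node.findtext("statusStr"),
        })
    return out

print(rebalance_summary(SAMPLE, "examplevol"))
```

If a <volname> element were added under <volRebalance>, the `volume` argument could be dropped and several concurrent rebalance outputs could be aggregated safely.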

Kaushal's reply was:

> Having the volume name in xml output is a valid enhancement. Go ahead
> and open a RFE bug for it.
> The rebalance process on each node crawls the whole volume to find
> files which need to be migrated and which are present on bricks of the
> volume belonging to that node. So the rebalance status of a node can
> be considered the status of the brick. But if a node contains more
> than one brick of the volume being rebalanced, we don't have a way to
> differentiate, and I'm not sure if we could do that.
> 


So here's the bug.
Cheers!


Version-Release number of selected component (if applicable):
gluster --version
glusterfs 3.4.1 built on Sep 27 2013 13:13:58

How reproducible:
100%


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Copied from mailing list.

Comment 1 Niels de Vos 2014-11-27 14:45:19 UTC
Feature requests make the most sense against the 'mainline' release; there is no ETA for an implementation, and requests might get forgotten when filed against a particular version.

Comment 2 Amar Tumballi 2018-10-24 10:17:51 UTC
Not planning to work on this with glusterd (v1) anymore! The update for gd2 is that status would be served at a URL, and the JSON there would include these keys.

