Bug 1010975 - host UUID xml tag is required in rebalance/remove-brick status xml output
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: unspecified
Hardware: Unspecified OS: Unspecified
Priority: medium Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Bala.FA
QA Contact: senaik
Keywords: ZStream
Depends On:
Blocks: 1012296
Reported: 2013-09-23 08:29 EDT by Aravinda VK
Modified: 2015-11-22 21:57 EST
CC: 6 users

See Also:
Fixed In Version: glusterfs-3.4.0.34.1u2rhs-1.el6rhs
Doc Type: Bug Fix
Doc Text:
Previously, the XML output of the remove-brick and rebalance status commands did not contain the host UUIDs of bricks in the <node> section. Host UUIDs had to be found manually by looking at the output of the 'gluster peer status' command and matching it with the volume status output. With this update, the XML output for rebalance and remove-brick status contains the host UUID of each node.
Story Points: ---
Clone Of:
Cloned to: 1012296
Environment:
Last Closed: 2014-02-25 02:39:25 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Aravinda VK 2013-09-23 08:29:36 EDT
Description of problem:
The rebalance/remove-brick status output shows the host as "localhost" for the node on which the command is run; for the other hosts it displays the IP address. When the XML output is consumed by ovirt-engine/vdsm, this name cannot be mapped to the actual brick host for display in the UI.

If the brick host UUID (the output of 'gluster system:: uuid get') is included in the XML output, mapping to the actual brick host becomes straightforward.
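
For illustration only, a minimal consumer-side sketch in Python (hypothetical code, not taken from vdsm or ovirt-engine) of how the per-node stats could be keyed by host UUID once an <id> tag is present, as in the expected output below:

# Hypothetical consumer-side sketch: key each node's rebalance stats by its
# host UUID (<id>) instead of the ambiguous nodeName ("localhost"/IP address).
import subprocess
import xml.etree.ElementTree as ET

def rebalance_status_by_uuid(volume):
    xml_out = subprocess.check_output(
        ["gluster", "volume", "rebalance", volume, "status", "--xml"])
    root = ET.fromstring(xml_out)
    stats = {}
    for node in root.findall("./volRebalance/node"):
        host_uuid = node.findtext("id")  # the requested host UUID tag
        stats[host_uuid] = {
            "name": node.findtext("nodeName"),
            "files": int(node.findtext("files")),
            "size": int(node.findtext("size")),
            "status": node.findtext("statusStr"),
        }
    return stats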


Steps to Reproduce:
1. gluster volume rebalance <VOLNAME> status --xml
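2. gluster volume remove-brick <VOLNAME> <BRICK> status --xml (the remove-brick status variant named in the summary is affected in the same way; the brick argument here is a placeholder)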


Actual results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

Expected results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

Additional info:
Comment 3 Pavithra 2014-01-15 08:36:22 EST
Bala,

Can you please verify the doc text for technical accuracy?
Comment 4 Pavithra 2014-01-16 01:21:10 EST
Can you please verify the doc text for technical accuracy?
Comment 5 Bala.FA 2014-01-16 04:49:52 EST
Doc text looks good to me.
Comment 6 senaik 2014-01-16 08:02:31 EST
Verified : glusterfs 3.4.0.55rhs
=========

The rebalance status XML output now shows the brick host UUID for each node.


gluster volume rebalance DR status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <task-id>40914141-cd23-4657-a43a-e0ebc215daeb</task-id>
    <op>3</op>
    <nodeCount>4</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <id>24a0dfbb-d61d-4457-9050-63279a39bf94</id>
      <files>0</files>
      <size>0</size>
      <lookups>152</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>1.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.144</nodeName>
      <id>d6d88883-2719-4d07-8209-37e89db9a22e</id>
      <files>29</files>
      <size>304087040</size>
      <lookups>179</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>12.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.111</nodeName>
      <id>c594bc18-2b77-4224-9132-d31048d708d6</id>
      <files>0</files>
      <size>0</size>
      <lookups>152</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>1.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.82</nodeName>
      <id>8cbf10ca-8045-4dd6-943e-322131f9916f</id>
      <files>0</files>
      <size>0</size>
      <lookups>153</lookups>
      <failures>0</failures>
      <skipped>28</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>2.00</runtime>
    </node>
    <aggregate>
      <files>29</files>
      <size>304087040</size>
      <lookups>636</lookups>
      <failures>0</failures>
      <skipped>28</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>12.00</runtime>
    </aggregate>
  </volRebalance>
</cliOutput>
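
A hedged sanity check of the output above (illustrative Python only, assuming the gluster CLI is on PATH; not part of the verification run):

# Illustrative check: confirm each <node> in the status XML carries an <id>
# (host UUID) child and that the node count matches the listed nodes.
import subprocess
import xml.etree.ElementTree as ET

xml_out = subprocess.check_output(
    ["gluster", "volume", "rebalance", "DR", "status", "--xml"])
root = ET.fromstring(xml_out)
nodes = root.findall("./volRebalance/node")
assert len(nodes) == int(root.findtext("./volRebalance/nodeCount"))
assert all(node.findtext("id") for node in nodes), "node missing host UUID"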
Comment 8 errata-xmlrpc 2014-02-25 02:39:25 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
