Bug 1010975 - host UUID xml tag is required in rebalance/remove-brick status xml output
Summary: host UUID xml tag is required in rebalance/remove-brick status xml output
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Bala.FA
QA Contact: senaik
URL:
Whiteboard:
Depends On:
Blocks: 1012296
 
Reported: 2013-09-23 12:29 UTC by Aravinda VK
Modified: 2015-11-23 02:57 UTC (History)

Fixed In Version: glusterfs-3.4.0.34.1u2rhs-1.el6rhs
Doc Type: Bug Fix
Doc Text:
Previously, the XML output of the remove-brick and rebalance status commands did not contain the host UUIDs of bricks in the <node> section. Host UUIDs had to be found manually by looking at the output of the 'gluster peer status' command and matching it against the volume status output. With this update, the XML output for rebalance and remove-brick status contains the host UUID of each node.
Clone Of:
: 1012296 (view as bug list)
Environment:
Last Closed: 2014-02-25 07:39:25 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:0208 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #2 2014-02-25 12:20:30 UTC

Description Aravinda VK 2013-09-23 12:29:36 UTC
Description of problem:
Rebalance/remove-brick status shows the host as "localhost" for the node on which the command is run; for the other hosts it displays the IP address. When the XML output is consumed by ovirt-engine/vdsm, it fails to map the entry to the actual brick host for display in the UI.

If the XML output included the brick host UUID (the output of 'gluster system:: uuid get'), mapping the brick host would be straightforward.


Steps to Reproduce:
1. gluster volume rebalance <VOLNAME> status --xml


Actual results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

Expected results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>
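A consumer such as vdsm could then key on the stable <id> tag instead of the ambiguous <nodeName>. The following sketch shows one way to do that in Python; the helper name node_uuid_map is hypothetical, and the embedded XML is a trimmed copy of the expected output above, not a live command result.

```python
# Sketch: map host UUIDs to node name and status from the rebalance
# status XML. Assumes the <node>/<id> layout shown in the expected
# output above; node_uuid_map is an illustrative helper, not gluster API.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <volRebalance>
    <node>
      <nodeName>localhost</nodeName>
      <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
      <statusStr>completed</statusStr>
    </node>
  </volRebalance>
</cliOutput>"""

def node_uuid_map(xml_text):
    """Return {host UUID: (nodeName, statusStr)} from rebalance status XML."""
    root = ET.fromstring(xml_text)
    result = {}
    for node in root.iter("node"):
        # <id> is the stable host UUID; <nodeName> may be "localhost"
        # or an IP address depending on where the command ran.
        result[node.findtext("id")] = (
            node.findtext("nodeName"),
            node.findtext("statusStr"),
        )
    return result

print(node_uuid_map(SAMPLE))
```

In practice the XML text would come from running 'gluster volume rebalance <VOLNAME> status --xml' and capturing stdout; the UUID keys can then be matched against 'gluster system:: uuid get' on each peer.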

Additional info:

Comment 3 Pavithra 2014-01-15 13:36:22 UTC
Bala,

Can you please verify the doc text for technical accuracy?

Comment 4 Pavithra 2014-01-16 06:21:10 UTC
Can you please verify the doc text for technical accuracy?

Comment 5 Bala.FA 2014-01-16 09:49:52 UTC
Doc text looks good to me.

Comment 6 senaik 2014-01-16 13:02:31 UTC
Verified : glusterfs 3.4.0.55rhs
=========

The rebalance status XML output now includes the brick host UUID:


gluster volume rebalance DR status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <task-id>40914141-cd23-4657-a43a-e0ebc215daeb</task-id>
    <op>3</op>
    <nodeCount>4</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <id>24a0dfbb-d61d-4457-9050-63279a39bf94</id>
      <files>0</files>
      <size>0</size>
      <lookups>152</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>1.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.144</nodeName>
      <id>d6d88883-2719-4d07-8209-37e89db9a22e</id>
      <files>29</files>
      <size>304087040</size>
      <lookups>179</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>12.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.111</nodeName>
      <id>c594bc18-2b77-4224-9132-d31048d708d6</id>
      <files>0</files>
      <size>0</size>
      <lookups>152</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>1.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.82</nodeName>
      <id>8cbf10ca-8045-4dd6-943e-322131f9916f</id>
      <files>0</files>
      <size>0</size>
      <lookups>153</lookups>
      <failures>0</failures>
      <skipped>28</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>2.00</runtime>
    </node>
    <aggregate>
      <files>29</files>
      <size>304087040</size>
      <lookups>636</lookups>
      <failures>0</failures>
      <skipped>28</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>12.00</runtime>
    </aggregate>
  </volRebalance>
</cliOutput>

Comment 8 errata-xmlrpc 2014-02-25 07:39:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

