Description of problem:
Rebalance/remove-brick status shows the host as "localhost" for the node on which the command is run, while other hosts are shown by IP address. When the xml output is consumed by ovirt-engine/vdsm, it fails to map each entry to the actual brick host for display in the UI. If the brick host UUID (the output of "gluster system:: uuid get") were included in the xml output, mapping to the brick host would be straightforward.

Steps to Reproduce:
1. gluster volume rebalance <VOLNAME> status --xml

Actual results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

Expected results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

Additional info:
https://code.engineering.redhat.com/gerrit/#/c/13663/
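As a minimal sketch of why the `<id>` element helps a consumer such as ovirt-engine/vdsm: once each `<node>` carries the brick host UUID, the status entries can be keyed by UUID instead of the ambiguous "localhost"/IP `<nodeName>`. The sample document below is an abridged version of the expected output above; the helper function name is illustrative, not part of any actual vdsm API.

```python
import xml.etree.ElementTree as ET

# Abridged sample of the expected "gluster volume rebalance <VOLNAME> status --xml"
# output, including the proposed <id> (brick host UUID) element.
SAMPLE = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <volRebalance>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
  </volRebalance>
</cliOutput>"""

def nodes_by_uuid(xml_text):
    """Return {uuid: {'name': ..., 'status': ...}}, keyed by the <id> element,
    so entries can be matched to known peers regardless of whether the CLI
    reported "localhost" or an IP address in <nodeName>."""
    root = ET.fromstring(xml_text)
    result = {}
    for node in root.findall("./volRebalance/node"):
        result[node.findtext("id")] = {
            "name": node.findtext("nodeName"),
            "status": node.findtext("statusStr"),
        }
    return result

print(nodes_by_uuid(SAMPLE))
```

With only `<nodeName>` available, the same dictionary would be keyed by "localhost" on every node running the query, which is exactly the mapping failure this bug describes.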
Bala, Can you please verify the doc text for technical accuracy?
Doc text looks good to me.
Verified : glusterfs 3.4.0.55rhs
=========
The rebalance status xml output now shows the brick host UUID:

gluster volume rebalance DR status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <task-id>40914141-cd23-4657-a43a-e0ebc215daeb</task-id>
    <op>3</op>
    <nodeCount>4</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <id>24a0dfbb-d61d-4457-9050-63279a39bf94</id>
      <files>0</files>
      <size>0</size>
      <lookups>152</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>1.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.144</nodeName>
      <id>d6d88883-2719-4d07-8209-37e89db9a22e</id>
      <files>29</files>
      <size>304087040</size>
      <lookups>179</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>12.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.111</nodeName>
      <id>c594bc18-2b77-4224-9132-d31048d708d6</id>
      <files>0</files>
      <size>0</size>
      <lookups>152</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>1.00</runtime>
    </node>
    <node>
      <nodeName>10.70.37.82</nodeName>
      <id>8cbf10ca-8045-4dd6-943e-322131f9916f</id>
      <files>0</files>
      <size>0</size>
      <lookups>153</lookups>
      <failures>0</failures>
      <skipped>28</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>2.00</runtime>
    </node>
    <aggregate>
      <files>29</files>
      <size>304087040</size>
      <lookups>636</lookups>
      <failures>0</failures>
      <skipped>28</skipped>
      <status>3</status>
      <statusStr>completed</statusStr>
      <runtime>12.00</runtime>
    </aggregate>
  </volRebalance>
</cliOutput>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html