Description of problem:
`gluster volume status all --xml` returns wrong XML output when a gluster volume is down.

How reproducible:
Run `gluster volume status all --xml` with all volumes in the UP state, then run it again with any one volume in the DOWN state.

Steps to Reproduce:
1. With all volumes in the UP state, run `gluster volume status all --xml` and note the CLI output.
2. With any one volume in the DOWN state, run `gluster volume status all --xml` and note the CLI output.

Actual results: (Example only)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume v1 is not started</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume v1 is not started</output>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Expected results: (Example)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
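Additional info:
The stray error document is more than cosmetic: it introduces a second XML declaration, so the combined output is no longer a single well-formed XML document and a standard parser will reject it. As an illustration only (not part of the reproduction steps, and assuming xmllint from libxml2 is available on the node), well-formedness can be checked like this:

# Illustration only, assuming xmllint (libxml2) is installed.
# With all volumes up the pipeline exits 0; with one volume down the
# prepended error document makes the stream ill-formed and xmllint fails.
gluster volume status all --xml | xmllint --noout -
echo "parser exit status: $?"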
Patch posted for review @ https://code.engineering.redhat.com/gerrit/12481
Tested with glusterfs-3.4.0.51rhs.el6rhs:

0. Created a trusted storage pool of 2 RHSS nodes, i.e. peer probe <host-ip>
1. Created 3 volumes (1 pure replica, 2 distribute volumes), i.e. gluster volume create <vol-name> <brick-path>
2. Started the volumes, i.e. gluster volume start <vol-name>
3. Stopped one of the volumes, i.e. gluster volume stop <vol-name>
4. Got the status of all volumes using the XML dump, i.e. gluster volume status all --xml (see the sketch below)

The output of the XML dump was consistent when one of the volumes was down.
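A rough end-to-end sketch of the verification flow above; the host address, volume names, and brick paths are placeholders, not the exact values used in the test:

# Sketch only: placeholders for host, volume names, and brick paths.
gluster peer probe <host-ip>
gluster volume create repvol replica 2 <host1>:/bricks/r1 <host2>:/bricks/r2
gluster volume create distvol1 <host1>:/bricks/d1 <host2>:/bricks/d2
gluster volume create distvol2 <host1>:/bricks/e1 <host2>:/bricks/e2
gluster volume start repvol && gluster volume start distvol1 && gluster volume start distvol2
gluster volume stop distvol1          # take one volume down
gluster volume status all --xml       # should remain a single well-formed document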
Kaushal, can you please verify the doc text for technical accuracy?
Made a minor change to the doc text. The remaining text looks fine.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html