+++ This bug was initially created as a clone of Bug #1002403 +++

Description of problem:
`gluster volume status all --xml` returns malformed XML output when a gluster volume is down: the CLI prints two concatenated XML documents (an error document for the stopped volume followed by the status document), which is not well-formed XML.

How reproducible:
Run `gluster volume status all --xml` with all volumes in UP state, then run it again with any one volume in DOWN state.

Steps to Reproduce:
1. With all volumes in UP state, run `gluster volume status all --xml` and note the CLI output.
2. With any one volume in DOWN state, run `gluster volume status all --xml` and note the CLI output.

Actual results (example only; two concatenated XML documents):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume v1 is not started</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume v1 is not started</output>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Expected results (example; a single well-formed XML document):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
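To make the failure concrete: a standard XML parser accepts exactly one document per parse call, so the second `<?xml ...?>` prolog in the actual output above renders the whole stream unparseable. A minimal Python sketch of a consumer hitting this (capturing the output via subprocess is an assumption about how a consumer collects it, not part of the report):

import subprocess
import xml.etree.ElementTree as ET

# Capture the CLI output from the reproduction steps above.
out = subprocess.run(
    ["gluster", "volume", "status", "all", "--xml"],
    capture_output=True, text=True,
).stdout

try:
    root = ET.fromstring(out)
    for vol in root.iter("volume"):
        print(vol.findtext("volName"), vol.findtext("nodeCount"))
except ET.ParseError as err:
    # With a stopped volume present, the buggy CLI emits two concatenated
    # documents and this fails with "junk after document element".
    print("unparseable CLI output:", err)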
REVIEW: http://review.gluster.org/5773 (cli: Fix 'status all' xml output when volumes are not started) posted (#1) for review on master by Kaushal M (kaushal)
COMMIT: http://review.gluster.org/5773 committed in master by Vijay Bellur (vbellur)
------
commit 7d9bc0d21408c31651a65a6ec0e67c3b8acd0fde
Author: Kaushal M <kaushal>
Date:   Wed Sep 4 13:06:57 2013 +0530

    cli: Fix 'status all' xml output when volumes are not started

    CLI now only outputs one XML document for 'status all' only
    containing those volumes which are started.

    BUG: 1004218
    Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/5773
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Gluster Build System <jenkins.com>
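After this change, 'status all' emits exactly one well-formed document and a stopped volume is simply absent from it, so the ParseError path in the sketch above goes away. A follow-up sketch: a consumer that still needs to notice stopped volumes can diff the parsed names against `gluster volume list` (the helper names below are hypothetical, and using `gluster volume list` for the full name set is this sketch's assumption):

import subprocess
import xml.etree.ElementTree as ET

def started_volumes(xml_text):
    """Names of volumes present in the fixed 'status all' XML."""
    root = ET.fromstring(xml_text)
    return {vol.findtext("volName") for vol in root.iter("volume")}

def all_volumes():
    """Every volume name known to glusterd, one per line of output."""
    out = subprocess.run(["gluster", "volume", "list"],
                         capture_output=True, text=True).stdout
    return set(out.split())

status_xml = subprocess.run(
    ["gluster", "volume", "status", "all", "--xml"],
    capture_output=True, text=True,
).stdout
stopped = all_volumes() - started_volumes(status_xml)
print("not started:", sorted(stopped) or "none")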
REVIEW: http://review.gluster.org/5970 (cli: Fix 'status all' xml output when volumes are not started) posted (#1) for review on release-3.4 by Kaushal M (kaushal)
COMMIT: http://review.gluster.org/5970 committed in release-3.4 by Vijay Bellur (vbellur)
------
commit ac2f281ad3105236b024550bac48395d513260ec
Author: Kaushal M <kaushal>
Date:   Wed Sep 4 13:06:57 2013 +0530

    cli: Fix 'status all' xml output when volumes are not started

    Backport of 7d9bc0d21408c31651a65a6ec0e67c3b8acd0fde from master

    CLI now only outputs one XML document for 'status all' only
    containing those volumes which are started.

    BUG: 1004218
    Change-Id: I119ac40282380886b46a09fd9a19d35115fd869d
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/5970
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user