Description of problem:
`gluster volume info <invalid_volname> --xml` returns opRet as 0 in the <opRet> XML tag.

Version-Release number of selected component (if applicable):
3.4.0beta1 built on May 10 2013 17:55:26

[root@vdsm_tsm vdsm]# gluster --version
glusterfs 3.4.0beta1 built on May 10 2013 17:55:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@vdsm_tsm vdsm]# glusterfs --version
glusterfs 3.4.0beta1 built on May 10 2013 17:55:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

How reproducible:
Always

Steps to Reproduce:
(below, dpkvol123 is an invalid volname)

[root@vdsm_tsm vdsm]# gluster volume info dpkvol123
Volume dpkvol123 does not exist
[root@vdsm_tsm vdsm]# echo $?
1

[root@vdsm_tsm vdsm]# gluster volume info dpkvol123 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <count>0</count>
    </volumes>
  </volInfo>
</cliOutput>
[root@vdsm_tsm vdsm]# echo $?
0

Actual results:
As above.

Expected results:
opRet should return 1 instead of 0, and the opErrno and other tags should carry the error information.

Additional info:
FWIW, this bug came to light while I was trying to use `volume info` from the vdsm gluster plugin.

[root@vdsm_tsm vdsm]# PYTHONPATH=/usr/share/vdsm/ python
Python 2.7.3 (default, Apr 30 2012, 21:18:11)
[GCC 4.7.0 20120416 (Red Hat 4.7.0-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import supervdsm as svdsm
>>> svdsmProxy = svdsm.getProxy()
>>> volInfo = svdsmProxy.glusterVolumeInfo('dpkvol')
>>> print volInfo
{'dpkvol': {'transportType': ['TCP'], 'uuid': 'a28e2828-d60d-40be-a3ba-75ecec788506', 'bricks': ['192.168.122.53:/home/dpkshetty/brick'], 'volumeName': 'dpkvol', 'volumeType': 'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '1', 'distCount': '1', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {}}}
>>> volInfo = svdsmProxy.glusterVolumeInfo('dpkvol123')
>>> print volInfo
{}

I also checked the gluster CLI for another command, and --xml is behaving correctly for `volume set`, as seen below...

[root@vdsm_tsm vdsm]# gluster volume set dpkvol123 server.allow-insecure on
volume set: failed: Volume dpkvol123 does not exist
volume set: failed
[root@vdsm_tsm vdsm]# echo $?
1
-----
[root@vdsm_tsm vdsm]# gluster volume set dpkvol123 server.allow-insecure on --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume dpkvol123 does not exist</opErrstr>
  <cliOp>volSet</cliOp>
  <output>Set volume unsuccessful</output>
</cliOutput>
[root@vdsm_tsm vdsm]# echo $?
0

Also, I didn't check *all* gluster CLI commands, so I can't say whether this issue is limited to `volume info` or affects others too. It would be good for someone to test all CLI options so that any other failing commands can also be fixed.

thanx,
deepak
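To make the impact on XML consumers concrete, here is a minimal sketch of the kind of opRet check a caller such as the vdsm plugin has to perform. The helper name `cli_succeeded` is hypothetical; the two XML strings are copied verbatim from the command outputs shown above.

```python
import xml.etree.ElementTree as ET

# Output of `gluster volume info dpkvol123 --xml` as reported above:
# opRet is 0 even though the volume does not exist.
VOL_INFO_XML = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo><volumes><count>0</count></volumes></volInfo>
</cliOutput>"""

# Output of `gluster volume set dpkvol123 ... --xml` for the same invalid
# volume: here opRet correctly signals the failure.
VOL_SET_XML = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume dpkvol123 does not exist</opErrstr>
  <cliOp>volSet</cliOp>
  <output>Set volume unsuccessful</output>
</cliOutput>"""

def cli_succeeded(xml_text):
    """Return True if the cliOutput reports success (opRet == 0)."""
    root = ET.fromstring(xml_text)
    return int(root.findtext('opRet')) == 0

# The buggy `volume info` output looks successful to any opRet-based check,
# while `volume set` is correctly detected as a failure.
print(cli_succeeded(VOL_INFO_XML))  # True, despite the invalid volname
print(cli_succeeded(VOL_SET_XML))   # False
```

This is why the plugin silently gets `{}` back for an invalid volume instead of an error: checking opRet alone cannot distinguish the failure from a successful query that happens to return zero volumes.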
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5. This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" below the comment box to "bugs". If there is no response by the end of the month, this bug will get automatically closed.
GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release, please reopen this bug and change the version, or open a new bug.