Bug 1319886 - gluster volume info --xml returns 0 for nonexistent volume
Summary: gluster volume info --xml returns 0 for nonexistent volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.1
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Samikshan Bairagya
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1321836 1351522 1352880
 
Reported: 2016-03-21 18:17 UTC by Jonathan Holloway
Modified: 2017-03-23 05:28 UTC
8 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1321836 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:28:06 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Jonathan Holloway 2016-03-21 18:17:20 UTC
Description of problem:
gluster volume info <nonexistent_volname> --xml and gluster volume status <nonexistent_volname> --xml return 0

Version-Release number of selected component (if applicable):
3.7.5-19 and earlier

How reproducible:
Every time

Steps to Reproduce:
1. Start glusterd
2. Execute gluster volume info <nonexistent_volume> --xml
     # gluster volume info sdksjsdjl --xml; echo $?
3. Execute gluster volume status <nonexistent_volume> --xml
     # gluster volume status sdksjsdjl --xml; echo $?

Actual results:
Both commands exit with return code 0.
Without --xml, they return 1.

Expected results:
Both commands with a nonexistent volume and the --xml option should return a non-zero return code (preferably 1, to match the commands without --xml).

Additional info:
[root@x ~]# gluster volume info sdksjsdjl --xml; echo $?
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <count>0</count>
    </volumes>
  </volInfo>
</cliOutput>
0

[root@x ~]# gluster volume status sdksjsdjl --xml; echo $?
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>30800</opErrno>
  <opErrstr>Volume sdksjsdjl does not exist</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume sdksjsdjl does not exist</output>
</cliOutput>
0

[root@x ~]# gluster volume info sdksjsdjl; echo $?
Volume sdksjsdjl does not exist
1

[root@x ~]# gluster volume status sdksjsdjl; echo $?
Volume sdksjsdjl does not exist
1
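A hedged workaround sketch for scripts that must run against unfixed builds. As captured above, `gluster volume status --xml` reports the failure in <opRet> while the shell exit status is 0, so a script can parse <opRet> instead of checking `$?`; note that `volume info --xml` reports <opRet>0</opRet> even for a missing volume, so its <count> element would need to be checked as well. The sample XML below is the status output pasted above; in practice it would come from `gluster volume status <volname> --xml`.

```shell
# Sample failure output captured in this report; in a real script this
# would be: xml=$(gluster volume status "$volname" --xml)
xml='<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>30800</opErrno>
  <opErrstr>Volume sdksjsdjl does not exist</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume sdksjsdjl does not exist</output>
</cliOutput>'

# Extract <opRet> with sed (avoids a dependency on xmllint).
opret=$(printf '%s\n' "$xml" | sed -n 's:.*<opRet>\([^<]*\)</opRet>.*:\1:p')

# Treat any non-zero opRet as failure, regardless of the CLI exit status.
if [ "$opret" != "0" ]; then
  echo "command failed (opRet=$opret)"
fi
```

This sidesteps the broken exit status entirely; once the fix lands, checking `$?` directly is sufficient again.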

Comment 2 Atin Mukherjee 2016-06-23 16:54:11 UTC
http://review.gluster.org/13843 has been merged upstream.

Comment 3 Atin Mukherjee 2016-06-23 16:55:00 UTC
I've updated the summary since the issue is only with the info command.

Comment 5 Atin Mukherjee 2016-09-17 15:02:40 UTC
Upstream mainline : http://review.gluster.org/13843
Upstream 3.8 : http://review.gluster.org/14863

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.

Comment 10 Manisha Saini 2016-10-26 13:55:55 UTC
Verified this Bug on 3.2.0

# cat /etc/redhat-storage-release 
Red Hat Gluster Storage Server 3.2.0

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.3 (Maipo)

# rpm -qa | grep gluster
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch


Steps:
1. Start glusterd
2. Execute gluster volume info <nonexistent_volume> --xml
     # gluster volume info dist_replicaa --xml; echo $?

3. Execute gluster volume status <nonexistent_volume> --xml
     # gluster volume status dist_replicaa --xml; echo $?


Output:

[root@dhcp37-55 bricks]# gluster v info dist_replicaa --xml ; echo $?
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>30806</opErrno>
  <opErrstr>Volume does not exist</opErrstr>
  <volInfo>
    <volumes>
      <count>0</count>
    </volumes>
  </volInfo>
</cliOutput>
0

[root@dhcp37-55 bricks]# gluster v status dist_replicaa --xml ; echo $?
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>30800</opErrno>
  <opErrstr>Volume dist_replicaa does not exist</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume dist_replicaa does not exist</output>
</cliOutput>
0

[root@dhcp37-55 bricks]# gluster v status dist_replicaa  ; echo $?
Volume dist_replicaa does not exist
1

[root@dhcp37-55 bricks]# gluster v info dist_replicaa  ; echo $?
Volume dist_replicaa does not exist
1


Both commands with --xml return a code of 1.
Without --xml, the return code is also 1.

Hence marking this bug as Verified.

Comment 12 errata-xmlrpc 2017-03-23 05:28:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

