Bug 1502691

Summary: gluster volume status all clients --xml gives blank result (no error)
Product: [Community] GlusterFS
Reporter: Sanju <srakonde>
Component: cli
Assignee: Sanju <srakonde>
Status: CLOSED UPSTREAM
QA Contact:
Severity: medium
Docs Contact:
Priority: unspecified
Version: mainline
CC: amukherj, bugs, ccalhoun, gyadav, rhinduja, rhs-bugs, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1500398
Environment:
Last Closed: 2017-10-17 06:18:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1500398
Bug Blocks:

Description Sanju 2017-10-16 12:55:52 UTC
+++ This bug was initially created as a clone of Bug #1500398 +++

Description of problem:
  "gluster volume status all clients --xml" is used by gstatus, but "gluster volume status all clients --xml" gives no output, so gstatus is failing as well.

Version-Release number of selected component (if applicable):
  OS: RHEL 7.4
  Kernel: kernel-3.10.0-693.2.2.el7.x86_64
  Gluster: glusterfs-server-3.8.4-44.el7rhgs.x86_64

How reproducible:
  Unable to reproduce in a test environment

Steps to Reproduce:
  N/A

Actual results:
  See attachment.

Expected results:
  The command should emit its usual XML status document, and gstatus should run and return proper results.

Additional info:
  [collab-shell] https://gitlab.cee.redhat.com/gss-tools/collab-shell

    # ssh your_kerb.redhat.com
    # cd /cases/01938746

  The following files have been downloaded and extracted on collab-shell:
  ---------------------
	1M	redhat_gluster_issue.txt
	16M	sosreport-AKerkhove.01938746-20170927144346.tar.xz
	80M	sosreport-AKerkhove.01938746-20170927144416.tar.xz
  ---------------------
  View attachments here: http://collab-shell.usersys.redhat.com/01938746/

Log Observations: mdb-gsn-01 /var/log/messages:
-------------------------------------------------------
Sep 25 12:33:49 mdb-gsn-01 python: detected unhandled Python exception in '/bin/gstatus'
Sep 25 12:33:54 mdb-gsn-01 python: communication with ABRT daemon failed: timed out
Sep 25 12:33:54 mdb-gsn-01 snmpd: Traceback (most recent call last):
Sep 25 12:33:54 mdb-gsn-01 snmpd: File "/bin/gstatus", line 221, in <module>
Sep 25 12:33:54 mdb-gsn-01 snmpd: main()
Sep 25 12:33:54 mdb-gsn-01 snmpd: File "/bin/gstatus", line 132, in main
Sep 25 12:33:54 mdb-gsn-01 snmpd: cluster.initialise()
Sep 25 12:33:54 mdb-gsn-01 snmpd: File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 87, in initialise
Sep 25 12:33:54 mdb-gsn-01 snmpd: set_active_peer()  # setup GlusterCommand class to have a valid node for commands
Sep 25 12:33:54 mdb-gsn-01 snmpd: File "/usr/lib/python2.7/site-packages/gstatus/libcommand/glustercmd.py", line 35, in set_active_peer
Sep 25 12:33:54 mdb-gsn-01 snmpd: with open(peerFile) as peer:
Sep 25 12:33:54 mdb-gsn-01 snmpd: IOError: [Errno 13] Permission denied: '/var/lib/glusterd/peers/9bdabe90-a42e-4dc0-829a-28d7bf07997e'
Sep 25 12:33:54 mdb-gsn-01 snmpd[37044]: Connection from UDP: [172.17.60.91]:48252->[172.17.60.91]:161
Sep 25 12:33:54 mdb-gsn-01 snmpd[37044]: Connection from UDP: [172.17.60.91]:48252->[172.17.60.91]:161
Sep 25 12:33:54 mdb-gsn-01 snmpd[37044]: Connection from UDP: [172.17.60.91]:48252->[172.17.60.91]:161
Sep 25 12:33:54 mdb-gsn-01 snmpd[37044]: Connection from UDP: [172.17.60.91]:48252->[172.17.60.91]:161
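
The traceback shows gstatus crashing inside set_active_peer() during cluster.initialise(), i.e. before it ever issues the gluster CLI command. A rough reconstruction of that function from the traceback (the real gstatus source may differ):

    import os

    PEER_DIR = '/var/lib/glusterd/peers'

    def set_active_peer():
        # Pick a node to run gluster commands against, based on peer state.
        for entry in os.listdir(PEER_DIR):
            peer_file = os.path.join(PEER_DIR, entry)
            with open(peer_file) as peer:  # IOError: [Errno 13] Permission denied
                peer.read()                # peer files are root-readable only

Run as a non-root user (here apparently invoked via snmpd), the open() fails with EACCES, matching the IOError in the log above.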

--- Additional comment from Cal Calhoun on 2017-10-11 09:54:28 EDT ---

Results from requested command run:

[root@mdb-gsn-01 alex]# gluster volume status all clients --xml    
[root@mdb-gsn-01 alex]# echo $?
2
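
The empty stdout together with exit status 2 suggests the CLI aborted before serializing any XML. A small diagnostic sketch (hypothetical, not from the case data) that captures the return code and both output streams in one run:

    import subprocess

    p = subprocess.Popen(
        ['gluster', 'volume', 'status', 'all', 'clients', '--xml'],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    print('rc=%d, stdout=%d bytes, stderr=%r' % (p.returncode, len(out), err))

If stderr is empty as well, /var/log/glusterfs/cli.log on the affected node is the next place to look for where the command bailed out.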