Bug 1467807 - gluster volume status --xml fails when there are 100 volumes
Summary: gluster volume status --xml fails when there are 100 volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cli
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Atin Mukherjee
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On: 1467841 1470488 1470495
Blocks: 1417151
 
Reported: 2017-07-05 08:22 UTC by Anil Shah
Modified: 2018-01-15 08:32 UTC
CC: 11 users

Fixed In Version: glusterfs-3.8.4-33
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1467841
Environment:
Last Closed: 2017-09-21 05:02:13 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2774 0 normal SHIPPED_LIVE glusterfs bug fix and enhancement update 2017-09-21 08:16:29 UTC

Description Anil Shah 2017-07-05 08:22:12 UTC
Description of problem:

When there is a large number of volumes (100 in my case), the gluster v status --xml command fails.

Version-Release number of selected component (if applicable):

glusterfs-3.8.4-18.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create 100 volumes.
2. Execute gluster v status --xml.
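The steps above can be sketched as a script. This is a hypothetical reproduction sketch, not taken from the report: the brick path and use of the local hostname are assumptions, and "force" is only needed when bricks live on the root partition.

```shell
#!/bin/sh
# Hypothetical reproduction sketch (brick path and hostname are assumptions).
# Create and start 100 single-brick volumes, then run the failing command.
for i in $(seq 1 100); do
    gluster volume create "testvol_$i" "$(hostname):/bricks/testvol_$i" force
    gluster volume start "testvol_$i"
done

gluster volume status all clients --xml > /tmp/status.xml
echo "exit status: $?"   # 92 on the affected build, 0 after the fix
```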

Actual results:

[root@dhcp46-22 ~]# gluster v status all clients --xml
[root@dhcp46-22 ~]# echo $?
92

The command fails without producing any XML output.

Expected results:

gluster v status --xml should print the command's output in XML.

Additional info:

Comment 2 Anil Shah 2017-07-05 08:26:53 UTC
This bug is related to customer BZ# 1454544, which has to go into 3.3.
Marking this bug as a blocker.

Comment 3 Atin Mukherjee 2017-07-05 10:13:53 UTC
upstream patch : https://review.gluster.org/#/c/17702

Comment 6 Atin Mukherjee 2017-07-06 10:06:51 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/111200

Comment 13 Anil Shah 2017-07-17 10:11:46 UTC
gluster v status all clients --xml works every time when there are 100 volumes.


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gluster_shared_storage</volName>
        <nodeCount>3</nodeCount>
        <node>
          <hostname>10.70.47.192</hostname>
          <path>/var/lib/glusterd/ss_brick</path>
          <peerid>ec9efcbd-a629-4fe8-854d-3ce3c1c30cb6</peerid>
          <status>1</status>
          <port>49157</port>
          <ports>
            <tcp>49157</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>24963</pid>
          <clientsStatus>
            <clientCount>2</clientCount>
            <client>
              <hostname>10.70.46.22:337</hostname>
              <bytesRead>1108</bytesRead>
              <bytesWrite>668</bytesWrite>
              <opVersion>31101</opVersion>
            </client>
            <client>
              <hostname>10.70.47.56:337</hostname>
[... output truncated ...]
Bug verified on build glusterfs-3.8.4-33.el7rhgs.x86_64.
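The verification output above can also be sanity-checked programmatically. A minimal sketch using Python's standard library; the embedded sample is an abbreviated stand-in for the real (much larger) document, keeping only field names that appear in the output above:

```python
import xml.etree.ElementTree as ET

# Abbreviated stand-in for the `gluster v status all clients --xml` output.
sample = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gluster_shared_storage</volName>
        <nodeCount>3</nodeCount>
        <node>
          <hostname>10.70.47.192</hostname>
          <status>1</status>
          <clientsStatus>
            <clientCount>2</clientCount>
          </clientsStatus>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>"""

root = ET.fromstring(sample)
# opRet 0 means the CLI reported success; a truncated document would
# instead raise a ParseError in ET.fromstring.
assert root.findtext("opRet") == "0", "CLI reported failure"
for vol in root.iter("volume"):
    print(vol.findtext("volName"), "client counts:",
          [c.text for c in vol.iter("clientCount")])
```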

Comment 21 errata-xmlrpc 2017-09-21 05:02:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

