Bug 1467807 - gluster volume status --xml fails when there are 100 volumes [NEEDINFO]
Status: VERIFIED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: cli
Version: 3.3
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.0
Assigned To: Atin Mukherjee
QA Contact: Anil Shah
Depends On: 1467841 1470488 1470495
Blocks: 1417151
Reported: 2017-07-05 04:22 EDT by Anil Shah
Modified: 2017-07-19 07:53 EDT (History)
10 users

See Also:
Fixed In Version: glusterfs-3.8.4-33
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1467841 (view as bug list)
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rhinduja: needinfo? (ashah)


Attachments: None
Description Anil Shah 2017-07-05 04:22:12 EDT
Description of problem:

When there is a large number of volumes (100 in this case), the gluster v status --xml command fails.

Version-Release number of selected component (if applicable):

glusterfs-3.8.4-18.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create 100 volumes.
2. Execute gluster v status --xml.
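The reproduction steps above can be sketched as follows. This is a minimal sketch, not the reporter's actual script: the peer hostname (server1), brick root (/bricks), and volume names (testvolN) are hypothetical, and the default dry-run mode only prints the commands instead of invoking the gluster CLI.

```python
import subprocess

def run(cmd, dry_run=True):
    """Run a gluster CLI command, or just print it in dry-run mode."""
    if dry_run:
        print(" ".join(cmd))
        return 0
    return subprocess.call(cmd)

# Hypothetical names: 'server1' peer and a /bricks brick root.
for i in range(1, 101):
    vol = f"testvol{i}"
    run(["gluster", "volume", "create", vol, f"server1:/bricks/{vol}", "force"])
    run(["gluster", "volume", "start", vol])

# The failing command: on the affected build, with ~100 volumes present,
# this exited with status 92 and produced no XML.
rc = run(["gluster", "volume", "status", "all", "clients", "--xml"])
print("exit status:", rc)
```

On a real cluster, set dry_run=False and check the exit status and that the output parses as XML.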

Actual results:

[root@dhcp46-22 ~]# gluster v status all clients --xml
[root@dhcp46-22 ~]# echo $?
92

The command fails without producing proper XML output.

Expected results:

gluster v status --xml should print the XML output of the command.
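A simple way to verify the expected behaviour is to parse the command's output and check the opRet field. The check below is an illustrative sketch (the helper name and the embedded sample, modeled on the output in comment 13, are not from the bug itself); on the broken build the empty/partial output would raise a parse error instead.

```python
import xml.etree.ElementTree as ET

def check_status_xml(xml_text):
    """Parse `gluster volume status --xml` output; return (opRet, volume count).

    Raises ET.ParseError when the output is not well-formed XML, which is
    the failure mode reported here (no XML emitted at all).
    """
    root = ET.fromstring(xml_text)
    op_ret = int(root.findtext("opRet"))
    volumes = root.findall("./volStatus/volumes/volume")
    return op_ret, len(volumes)

# Minimal well-formed sample modeled on the output in comment 13.
sample = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume><volName>gluster_shared_storage</volName></volume>
    </volumes>
  </volStatus>
</cliOutput>"""

print(check_status_xml(sample))  # (0, 1)
```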

Additional info:
Comment 2 Anil Shah 2017-07-05 04:26:53 EDT
Since this bug is related to customer BZ# 1454544, which has to go into 3.3, marking this bug as a blocker.
Comment 3 Atin Mukherjee 2017-07-05 06:13:53 EDT
upstream patch : https://review.gluster.org/#/c/17702
Comment 6 Atin Mukherjee 2017-07-06 06:06:51 EDT
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/111200
Comment 13 Anil Shah 2017-07-17 06:11:46 EDT
gluster v status all clients --xml works every time when there are 100 volumes.


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gluster_shared_storage</volName>
        <nodeCount>3</nodeCount>
        <node>
          <hostname>10.70.47.192</hostname>
          <path>/var/lib/glusterd/ss_brick</path>
          <peerid>ec9efcbd-a629-4fe8-854d-3ce3c1c30cb6</peerid>
          <status>1</status>
          <port>49157</port>
          <ports>
            <tcp>49157</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>24963</pid>
          <clientsStatus>
            <clientCount>2</clientCount>
            <client>
              <hostname>10.70.46.22:337</hostname>
              <bytesRead>1108</bytesRead>
              <bytesWrite>668</bytesWrite>
              <opVersion>31101</opVersion>
            </client>
            <client>
              <hostname>10.70.47.56:337</hostname>
[... output truncated]

bug verified on build glusterfs-3.8.4-33.el7rhgs.x86_64
