Bug 1272318 - gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Assigned To: hari gowtham
bugs@gluster.org
Keywords: Triaged
Depends On:
Blocks: 1278394 1294497 1318505
Reported: 2015-10-16 01:18 EDT by Arthy Loganathan
Modified: 2017-03-08 05:54 EST
CC: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1278394
Environment:
Last Closed: 2017-03-08 05:54:10 EST
Type: Bug


Attachments: None
Description Arthy Loganathan 2015-10-16 01:18:22 EDT
Description of problem:
All the common services of a tiered volume, such as Quota and NFS, are tagged inside the <coldBricks> tag.
It would be good if the gluster volume status XML output of a tiered volume had the same tag structure as the XML output of a non-tiered volume.
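
As a quick way to see the misplacement, the <hostname> entries under <coldBricks> can be pulled out of the XML (assuming xmllint from libxml2 is available; volume name as in the output below):

[root@node31 upstream]# gluster volume status tiervol --xml | xmllint --xpath '//coldBricks/node/hostname' -

Alongside the four cold bricks, this also returns the "NFS Server" and "Quota Daemon" entries, which belong to the volume as a whole rather than to the cold tier.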


Version-Release number of selected component (if applicable):
glusterfs-3.7.5-0.18

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume (example commands below)
2. Run gluster volume status --xml
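
For reference, a setup along these lines reproduces it (a sketch only: brick paths are taken from the output below, the replica counts are illustrative, and attach-tier is the 3.7-era CLI form):

# gluster volume create tiervol replica 2 10.70.46.174:/bricks/brick2/tiervol_brick0 10.70.46.140:/bricks/brick1/tiervol_brick1 10.70.46.174:/bricks/brick3/tiervol_brick2 10.70.46.140:/bricks/brick2/tiervol_brick3
# gluster volume start tiervol
# gluster volume attach-tier tiervol replica 2 10.70.46.174:/bricks/brick2/tiervol_tier0 10.70.46.140:/bricks/brick2/tiervol_tier1
# gluster volume status tiervol --xml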

Actual results:

[root@node31 upstream]# gluster volume status tiervol --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>tiervol</volName>
        <nodeCount>10</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_tier1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49163</port>
            <ports>
              <tcp>49163</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19078</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_tier0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49167</port>
            <ports>
              <tcp>49167</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23585</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_brick0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49165</port>
            <ports>
              <tcp>49165</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23400</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick1/tiervol_brick1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49161</port>
            <ports>
              <tcp>49161</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18928</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick3/tiervol_brick2</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49166</port>
            <ports>
              <tcp>49166</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23418</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_brick3</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49162</port>
            <ports>
              <tcp>49162</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18946</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23605</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23811</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19099</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19260</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>ebdb671e-8371-4507-8be2-96c5db0a49ba</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Expected results:

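Based on the description, the output should follow the same tag structure as a non-tiered volume, with the per-volume services emitted as <node> entries of their own rather than nested inside <coldBricks>. A sketch of that layout (illustrative only, not the exact schema of any fix):

<volume>
  <volName>tiervol</volName>
  <nodeCount>10</nodeCount>
  <hotBricks>
    <!-- hot-tier brick <node> entries only -->
  </hotBricks>
  <coldBricks>
    <!-- cold-tier brick <node> entries only -->
  </coldBricks>
  <node>
    <hostname>NFS Server</hostname>
    ...
  </node>
  <node>
    <hostname>Quota Daemon</hostname>
    ...
  </node>
  <tasks>
    ...
  </tasks>
</volume>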

Additional info:
Comment 1 Kaushal 2017-03-08 05:54:10 EST
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
