Bug 1278394 - gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.2.0
Assigned To: Dan Lambright
QA Contact: krishnaram Karthick
Whiteboard: tier-glusterd
Keywords: Triaged
Depends On: 1272318
Blocks: 1294497 1318505 1351522
Reported: 2015-11-05 06:45 EST by nchilaka
Modified: 2017-03-23 01:24 EDT
CC List: 9 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Clone Of: 1272318
Clones: 1294497
Last Closed: 2017-03-23 01:24:28 EDT
Type: Bug


Attachments
gluster-vol-status.xml (12.16 KB, text/plain)
2016-10-11 09:01 EDT, krishnaram Karthick

Description nchilaka 2015-11-05 06:45:15 EST
+++ This bug was initially created as a clone of Bug #1272318 +++

Description of problem:
All the common services of a tiered volume, such as the NFS server and the Quota daemon, are tagged inside the <coldBricks> element.
The gluster volume status XML output for a tiered volume should have the same tag structure as the output for a non-tiered volume.


Version-Release number of selected component (if applicable):
glusterfs-3.7.5-0.18

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume (a hedged shell sketch of both steps follows below).
2. Run gluster volume status --xml.
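
For reference, a minimal shell sketch of the reproduction steps. The volume name, host names, and brick paths are placeholders, and the attach-tier syntax shown is the glusterfs 3.7-era form (later releases use gluster volume tier <volname> attach ...):

# Create and start a plain replicated volume (hosts and brick paths are illustrative).
gluster volume create tiervol replica 2 host1:/bricks/brick1/tiervol_brick0 host2:/bricks/brick1/tiervol_brick1
gluster volume start tiervol

# Attach a replicated hot tier, making the volume tiered.
gluster volume attach-tier tiervol replica 2 host1:/bricks/brick2/tiervol_tier0 host2:/bricks/brick2/tiervol_tier1

# Dump the volume status as XML.
gluster volume status tiervol --xml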

Actual results:

[root@node31 upstream]# gluster volume status tiervol --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>tiervol</volName>
        <nodeCount>10</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_tier1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49163</port>
            <ports>
              <tcp>49163</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19078</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_tier0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49167</port>
            <ports>
              <tcp>49167</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23585</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_brick0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49165</port>
            <ports>
              <tcp>49165</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23400</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick1/tiervol_brick1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49161</port>
            <ports>
              <tcp>49161</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18928</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick3/tiervol_brick2</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49166</port>
            <ports>
              <tcp>49166</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23418</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_brick3</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49162</port>
            <ports>
              <tcp>49162</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18946</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23605</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23811</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19099</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19260</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>ebdb671e-8371-4507-8be2-96c5db0a49ba</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Expected results:
Only brick entries should appear under <hotBricks> and <coldBricks>; the common service entries (NFS Server, Quota Daemon) should be listed outside those sections, matching the tag structure of a non-tiered volume's XML output.
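
For illustration, a hedged sketch of the expected layout. Element names are taken from the actual output above; whether the service <node> entries sit directly under <volume> or under a wrapper element of their own is not confirmed here (the attachment gluster-vol-status.xml from comment 11 shows the verified structure), but the key point is that they no longer appear inside <coldBricks>:

<volume>
  <volName>tiervol</volName>
  <nodeCount>10</nodeCount>
  <hotBricks>
    ... hot-tier brick <node> entries only ...
  </hotBricks>
  <coldBricks>
    ... cold-tier brick <node> entries only ...
  </coldBricks>
  <node>
    <hostname>NFS Server</hostname>
    ...
  </node>
  <node>
    <hostname>Quota Daemon</hostname>
    ...
  </node>
  <tasks>...</tasks>
</volume>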


Additional info:
Comment 4 hari gowtham 2016-05-24 02:09:52 EDT
Upstream patch: http://review.gluster.org/#/c/13101/
3.7 patch: http://review.gluster.org/#/c/13757/
Comment 6 Nithya Balachandran 2016-08-03 03:11:07 EDT
Targeting this BZ for 3.2.0.
Comment 8 Atin Mukherjee 2016-09-17 11:14:43 EDT
Upstream mainline: http://review.gluster.org/13101
Upstream 3.8: Available as part of branching from mainline

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.
Comment 11 krishnaram Karthick 2016-10-11 09:01 EDT
Created attachment 1209155 [details]
gluster-vol-status.xml
Comment 12 krishnaram Karthick 2016-10-11 09:16:44 EDT
Verified the fix in build glusterfs-server-3.8.4-2.

Hot bricks, cold bricks, and process tags are kept separate. Marking the bug as verified.
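
For reference, one hedged way to spot-check this against the attached gluster-vol-status.xml (assuming xmllint from libxml2 is available; the file name is the attachment from comment 11):

# Count service entries nested under <coldBricks>; a fixed build should print 0 for both.
xmllint --xpath 'count(//coldBricks/node[hostname="NFS Server"])' gluster-vol-status.xml
xmllint --xpath 'count(//coldBricks/node[hostname="Quota Daemon"])' gluster-vol-status.xml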
Comment 14 errata-xmlrpc 2017-03-23 01:24:28 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
