Bug 1278394 - gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Dan Lambright
QA Contact: krishnaram Karthick
URL:
Whiteboard: tier-glusterd
Depends On: 1272318
Blocks: 1294497 1318505 1351522
 
Reported: 2015-11-05 11:45 UTC by Nag Pavan Chilakam
Modified: 2017-03-23 05:24 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1272318
Clones: 1294497
Environment:
Last Closed: 2017-03-23 05:24:28 UTC
Embargoed:


Attachments
gluster-vol-status.xml (12.16 KB, text/plain)
2016-10-11 13:01 UTC, krishnaram Karthick


Links
System: Red Hat Product Errata
ID: RHSA-2017:0486
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update
Last Updated: 2017-03-23 09:18:45 UTC

Description Nag Pavan Chilakam 2015-11-05 11:45:15 UTC
+++ This bug was initially created as a clone of Bug #1272318 +++

Description of problem:
All of the common services of a tiered volume, such as the NFS server and the Quota daemon, are tagged inside the <coldBricks> element.
The gluster volume status XML output of a tiered volume should use the same tag structure as that of a non-tiered volume, with these common services listed outside <coldBricks>.
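
For any script that consumes this XML, the misplacement means the NFS and Quota entries can only be found by searching under <coldBricks>. A quick way to see this against the output shown under Actual results below (a sketch, assuming xmllint from libxml2 is installed; the volume name tiervol is taken from this report):

# Extract the hostname text of every <node> nested under <coldBricks>
# (printed as one concatenated string). On an affected build the result
# includes "NFS Server" and "Quota Daemon" in addition to the real
# cold-brick hosts.
gluster volume status tiervol --xml | \
    xmllint --xpath '//coldBricks/node/hostname/text()' -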


Version-Release number of selected component (if applicable):
glusterfs-3.7.5-0.18

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume.
2. Run gluster volume status --xml (see the sketch below for concrete commands).
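
Expanding the two steps above into concrete commands (a sketch only; host names and brick paths are placeholders, and the attach-tier syntax shown is the glusterfs 3.7 form):

# 1. Create and start a plain 2x2 distributed-replicate volume
gluster volume create tiervol replica 2 \
    host1:/bricks/brick1/tiervol_brick0 host2:/bricks/brick1/tiervol_brick1 \
    host1:/bricks/brick2/tiervol_brick2 host2:/bricks/brick2/tiervol_brick3
gluster volume start tiervol

# (optional) enable quota so the Quota Daemon also shows up in the status output
gluster volume quota tiervol enable

# 2. Attach a replicated hot tier, turning it into a tiered volume
gluster volume attach-tier tiervol replica 2 \
    host1:/bricks/brick3/tiervol_tier0 host2:/bricks/brick3/tiervol_tier1

# 3. Dump the volume status as XML
gluster volume status tiervol --xml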

Actual results:

[root@node31 upstream]# gluster volume status tiervol --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>tiervol</volName>
        <nodeCount>10</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_tier1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49163</port>
            <ports>
              <tcp>49163</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19078</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_tier0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49167</port>
            <ports>
              <tcp>49167</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23585</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_brick0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49165</port>
            <ports>
              <tcp>49165</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23400</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick1/tiervol_brick1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49161</port>
            <ports>
              <tcp>49161</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18928</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick3/tiervol_brick2</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49166</port>
            <ports>
              <tcp>49166</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23418</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_brick3</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49162</port>
            <ports>
              <tcp>49162</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18946</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23605</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23811</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19099</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19260</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>ebdb671e-8371-4507-8be2-96c5db0a49ba</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Expected results:
Common services such as the NFS server and the Quota daemon should be listed outside the <coldBricks> element, so that the tiered volume's XML output has the same tag structure as a non-tiered volume's.

Additional info:

Comment 4 hari gowtham 2016-05-24 06:09:52 UTC
upstream patch : http://review.gluster.org/#/c/13101/
3.7 patch : http://review.gluster.org/#/c/13757/

Comment 6 Nithya Balachandran 2016-08-03 07:11:07 UTC
Targeting this BZ for 3.2.0.

Comment 8 Atin Mukherjee 2016-09-17 15:14:43 UTC
Upstream mainline : http://review.gluster.org/13101
Upstream 3.8: Available as part of branching from mainline

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.

Comment 11 krishnaram Karthick 2016-10-11 13:01:32 UTC
Created attachment 1209155 [details]
gluster-vol-status.xml

Comment 12 krishnaram Karthick 2016-10-11 13:16:44 UTC
Verified the fix in build glusterfs-server-3.8.4-2.

Hot brick, cold brick, and process tags are now kept separate. Marking the bug as verified.
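
A quick way to spot-check this on any build (a sketch, assuming xmllint is available; not the exact commands used during verification):

# Count service entries nested under <coldBricks>.
# This should print 0 on a fixed build; on the affected output shown in the
# description it prints 4 (two NFS Server and two Quota Daemon entries).
gluster volume status tiervol --xml | \
    xmllint --xpath 'count(//coldBricks/node[hostname="NFS Server" or hostname="Quota Daemon"])' -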

Comment 14 errata-xmlrpc 2017-03-23 05:24:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

