Bug 1271659 - gluster v status --xml fails for a replicated hot tier volume
Summary: gluster v status --xml fails for a replicated hot tier volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: hari gowtham
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1268810
Blocks: 1260783 1260923
 
Reported: 2015-10-14 12:49 UTC by hari gowtham
Modified: 2016-09-17 15:42 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.5-0.3
Doc Type: Bug Fix
Doc Text:
Clone Of: 1268810
Environment:
Last Closed: 2016-03-01 05:38:57 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2016:0193 (not private, priority: normal, status: SHIPPED_LIVE): Red Hat Gluster Storage 3.1 update 2, last updated 2016-03-01 10:20:36 UTC

Description hari gowtham 2015-10-14 12:49:34 UTC
+++ This bug was initially created as a clone of Bug #1268810 +++

Description of problem:
The gluster volume status --xml command fails for a tiered volume that has multiple hot bricks.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. gluster volume create tiervol replica 2
gfvm3:/opt/volume_test/tier_vol/b1_1
gfvm3:/opt/volume_test/tier_vol/b1_2
gfvm3:/opt/volume_test/tier_vol/b2_1
gfvm3:/opt/volume_test/tier_vol/b2_2
gfvm3:/opt/volume_test/tier_vol/b3_1
gfvm3:/opt/volume_test/tier_vol/b3_2 force

2. gluster volume start tiervol

3. echo 'y' | gluster volume attach-tier tiervol replica 2
gfvm3:/opt/volume_test/tier_vol/b4_1
gfvm3:/opt/volume_test/tier_vol/b4_2
gfvm3:/opt/volume_test/tier_vol/b5_1
gfvm3:/opt/volume_test/tier_vol/b5_2 force

4. gluster v status tiervol --xml
(a scripted well-formedness check for this step is sketched below)
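
A minimal scripted check for step 4, assuming the tiered volume tiervol from the steps above exists on the local node: it runs the status command and reports whether the output is well-formed XML with opRet 0 (the script itself is only an illustration, not part of the reproducer).

import subprocess
import sys
import xml.etree.ElementTree as ET

# Assumption: the tiered volume created in the steps above.
VOLUME = "tiervol"

# Run "gluster volume status <vol> --xml" and capture its output.
proc = subprocess.run(
    ["gluster", "volume", "status", VOLUME, "--xml"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True,
)
if proc.returncode != 0 or not proc.stdout.strip():
    sys.exit("FAIL: command exited %d with no usable output" % proc.returncode)

# The reported failure shows up as missing or malformed XML output.
try:
    root = ET.fromstring(proc.stdout)  # root element is <cliOutput>
except ET.ParseError as err:
    sys.exit("FAIL: output is not well-formed XML: %s" % err)

if root.findtext("opRet") != "0":
    sys.exit("FAIL: opRet=%s opErrstr=%s"
             % (root.findtext("opRet"), root.findtext("opErrstr")))

print("OK: nodeCount=%s" % root.findtext(".//nodeCount"))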

Actual results:


Expected results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>tiervol</volName>
        <nodeCount>11</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b5_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49164</port>
            <ports>
              <tcp>49164</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8684</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b5_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49163</port>
            <ports>
              <tcp>49163</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8687</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b4_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49162</port>
            <ports>
              <tcp>49162</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8699</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b4_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49161</port>
            <ports>
              <tcp>49161</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8708</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b1_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49155</port>
            <ports>
              <tcp>49155</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8716</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b1_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49156</port>
            <ports>
              <tcp>49156</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8724</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b2_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49157</port>
            <ports>
              <tcp>49157</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8732</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b2_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49158</port>
            <ports>
              <tcp>49158</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8740</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b3_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49159</port>
            <ports>
              <tcp>49159</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8750</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b3_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49160</port>
            <ports>
              <tcp>49160</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8751</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8678</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>975bfcfa-077c-4edb-beba-409c2013f637</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
      <volume>
        <volName>v1</volName>
        <nodeCount>4</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/hbr1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49154</port>
            <ports>
              <tcp>49154</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8763</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/cb1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49152</port>
            <ports>
              <tcp>49152</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8769</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/cb2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49153</port>
            <ports>
              <tcp>49153</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8778</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8678</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>cfdf6ebf-e4f9-45c5-b8d8-850bfbb426f3</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
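
The hotBricks/coldBricks split above is what consumers of the --xml output depend on. A minimal parsing sketch, assuming the expected output above is saved as status.xml (the file name is chosen only for illustration):

import xml.etree.ElementTree as ET

# Assumption: the expected output above saved locally as status.xml.
root = ET.parse("status.xml").getroot()  # root element is <cliOutput>

for volume in root.iter("volume"):
    print("volume:", volume.findtext("volName"))
    # Walk the hot and cold tiers separately, as XML consumers must.
    for section in ("hotBricks", "coldBricks"):
        tier = volume.find(section)
        if tier is None:
            continue
        for node in tier.findall("node"):
            print("  %-10s %s:%s status=%s tcp-port=%s" % (
                section,
                node.findtext("hostname"),
                node.findtext("path"),
                node.findtext("status"),
                node.findtext("ports/tcp"),
            ))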

Additional info:

--- Additional comment from Anand Nekkunti on 2015-10-08 10:48:57 EDT ---

patch: http://review.gluster.org/#/c/12302/

Comment 3 Nag Pavan Chilakam 2015-11-03 13:00:48 UTC
gluster status --xml for a tiered volume is working; moving to Verified.


[root@zod ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.5-5.el7rhgs.x86_64
glusterfs-fuse-3.7.5-5.el7rhgs.x86_64
glusterfs-3.7.5-5.el7rhgs.x86_64
glusterfs-server-3.7.5-5.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-5.el7rhgs.x86_64
glusterfs-cli-3.7.5-5.el7rhgs.x86_64
glusterfs-api-3.7.5-5.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-5.el7rhgs.x86_64
[root@zod ~]# gluster v status quota_one --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>quota_one</volName>
        <nodeCount>14</nodeCount>
        <hotBricks>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18811</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20257</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18854</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20275</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20293</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18883</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20311</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18901</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20347</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20356</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19003</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19012</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>eae47ea7-aea5-4220-8f1d-c6cfc145875d</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
[root@zod ~]#
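
The same XML also lends itself to a quick health check; a minimal sketch, assuming a dump like the one above is saved as status.xml (file name for illustration only), that flags any node whose <status> is not 1, for example the NFS Server entries with status 0 above:

import xml.etree.ElementTree as ET

# Assumption: a status dump like the one above saved locally as status.xml.
root = ET.parse("status.xml").getroot()

for volume in root.iter("volume"):
    name = volume.findtext("volName")
    # <node> entries cover bricks as well as NFS/Self-heal/Quota daemons.
    for node in volume.iter("node"):
        if node.findtext("status") != "1":
            print("%s: %s (%s) is not running, pid=%s" % (
                name,
                node.findtext("hostname"),
                node.findtext("path"),
                node.findtext("pid"),
            ))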

Comment 5 errata-xmlrpc 2016-03-01 05:38:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

