Bug 1268810 - gluster v status --xml for a replicated hot tier volume
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware/OS: Unspecified
Severity: unspecified
Assigned To: hari gowtham
QA Contact: bugs@gluster.org
Keywords: Triaged
Depends On:
Blocks: 1271659
Reported: 2015-10-05 06:50 EDT by hari gowtham
Modified: 2016-06-16 09:39 EDT
CC: 3 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Cloned To: 1271659
Environment:
Last Closed: 2016-06-16 09:39:34 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---


Attachments: None
Description hari gowtham 2015-10-05 06:50:39 EDT
Description of problem:
The existing gluster volume status --xml command fails for a tiered volume that has multiple hot bricks.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. gluster volume create tiervol replica 2
gfvm3:/opt/volume_test/tier_vol/b1_1
gfvm3:/opt/volume_test/tier_vol/b1_2
gfvm3:/opt/volume_test/tier_vol/b2_1
gfvm3:/opt/volume_test/tier_vol/b2_2
gfvm3:/opt/volume_test/tier_vol/b3_1
gfvm3:/opt/volume_test/tier_vol/b3_2 force

2. gluster volume start tiervol

3. echo 'y' | gluster volume attach-tier tiervol replica 2
gfvm3:/opt/volume_test/tier_vol/b4_1
gfvm3:/opt/volume_test/tier_vol/b4_2
gfvm3:/opt/volume_test/tier_vol/b5_1
gfvm3:/opt/volume_test/tier_vol/b5_2 force

4. gluster v status tiervol --xml

Actual results:


Expected results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>tiervol</volName>
        <nodeCount>11</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b5_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49164</port>
            <ports>
              <tcp>49164</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8684</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b5_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49163</port>
            <ports>
              <tcp>49163</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8687</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b4_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49162</port>
            <ports>
              <tcp>49162</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8699</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b4_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49161</port>
            <ports>
              <tcp>49161</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8708</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b1_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49155</port>
            <ports>
              <tcp>49155</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8716</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b1_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49156</port>
            <ports>
              <tcp>49156</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8724</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b2_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49157</port>
            <ports>
              <tcp>49157</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8732</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b2_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49158</port>
            <ports>
              <tcp>49158</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8740</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b3_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49159</port>
            <ports>
              <tcp>49159</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8750</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b3_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49160</port>
            <ports>
              <tcp>49160</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8751</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8678</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>975bfcfa-077c-4edb-beba-409c2013f637</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
      <volume>
        <volName>v1</volName>
        <nodeCount>4</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/hbr1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49154</port>
            <ports>
              <tcp>49154</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8763</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/cb1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49152</port>
            <ports>
              <tcp>49152</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8769</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/cb2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49153</port>
            <ports>
              <tcp>49153</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8778</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8678</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>cfdf6ebf-e4f9-45c5-b8d8-850bfbb426f3</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
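With the fix, consumers can rely on the <hotBricks>/<coldBricks> split in the output above. A minimal parsing sketch using Python's standard-library ElementTree (the embedded XML is a trimmed stand-in for the expected output, not literal command output):

```python
# Sketch: group bricks by tier from fixed `gluster v status --xml` output.
import xml.etree.ElementTree as ET

# Trimmed sample in the shape of the expected output above.
SAMPLE = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <volStatus><volumes><volume>
    <volName>tiervol</volName>
    <hotBricks>
      <node><hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b4_1</path><status>1</status></node>
    </hotBricks>
    <coldBricks>
      <node><hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b1_1</path><status>1</status></node>
    </coldBricks>
  </volume></volumes></volStatus>
</cliOutput>"""

def bricks_by_tier(xml_text):
    """Return {volname: {'hot': [brick paths], 'cold': [brick paths]}}."""
    root = ET.fromstring(xml_text)
    result = {}
    for vol in root.iter('volume'):
        name = vol.findtext('volName')
        result[name] = {
            'hot':  [n.findtext('path') for n in vol.find('hotBricks').iter('node')],
            'cold': [n.findtext('path') for n in vol.find('coldBricks').iter('node')],
        }
    return result

print(bricks_by_tier(SAMPLE))
```

Before the fix, this kind of consumer broke because the per-tier grouping was not emitted correctly when the hot tier had several bricks.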

Additional info:
Comment 1 Anand Nekkunti 2015-10-08 10:48:57 EDT
patch: http://review.gluster.org/#/c/12302/
Comment 2 nchilaka 2015-11-03 07:59:53 EST
Gluster status --xml for tier vol is working; moving to verified


[root@zod ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.5-5.el7rhgs.x86_64
glusterfs-fuse-3.7.5-5.el7rhgs.x86_64
glusterfs-3.7.5-5.el7rhgs.x86_64
glusterfs-server-3.7.5-5.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-5.el7rhgs.x86_64
glusterfs-cli-3.7.5-5.el7rhgs.x86_64
glusterfs-api-3.7.5-5.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-5.el7rhgs.x86_64
[root@zod ~]# gluster v status quota_one --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>quota_one</volName>
        <nodeCount>14</nodeCount>
        <hotBricks>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18811</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20257</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18854</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20275</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20293</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18883</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20311</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18901</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20347</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20356</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19003</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19012</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>eae47ea7-aea5-4220-8f1d-c6cfc145875d</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
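Note that daemon rows (NFS Server, Self-heal Daemon, Quota Daemon) appear as <node> entries alongside the bricks, with <status>0</status> and <pid>-1</pid> when down, as with the NFS Server rows above. A quick sketch (not part of the fix) for flagging down services from such output:

```python
# Sketch: flag down services/bricks in `gluster v status --xml` output.
import xml.etree.ElementTree as ET

# Trimmed sample in the shape of the verification output above.
SAMPLE = """<cliOutput><volStatus><volumes><volume>
  <volName>quota_one</volName>
  <coldBricks>
    <node><hostname>zod</hostname><path>/rhs/brick1/quota_one</path>
          <status>1</status><pid>20293</pid></node>
    <node><hostname>NFS Server</hostname><path>localhost</path>
          <status>0</status><pid>-1</pid></node>
    <node><hostname>Self-heal Daemon</hostname><path>localhost</path>
          <status>1</status><pid>20347</pid></node>
  </coldBricks>
</volume></volumes></volStatus></cliOutput>"""

def down_services(xml_text):
    """List (hostname, path) for every node reporting status != 1."""
    root = ET.fromstring(xml_text)
    return [(n.findtext('hostname'), n.findtext('path'))
            for n in root.iter('node')
            if n.findtext('status') != '1']

print(down_services(SAMPLE))  # [('NFS Server', 'localhost')]
```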
[root@zod ~]#
Comment 3 Niels de Vos 2016-06-16 09:39:34 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
