Bug 1269125 - Data Tiering: Regression: automation blocker: vol status for tier volumes using xml format is not working
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Assigned To: hari gowtham
bugs@gluster.org
Depends On:
Blocks: 1260923 glusterfs-3.7.6
Reported: 2015-10-06 07:50 EDT by nchilaka
Modified: 2015-11-17 00:58 EST (History)
3 users

See Also:
Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-17 00:58:48 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description nchilaka 2015-10-06 07:50:53 EDT
Description of problem:
========================
Most of our automation scripts rely on XML-formatted output for parsing values.
However, the XML output of "vol status" for a tier volume produces nothing at all; it was working in earlier builds.
This is a serious blocker for automation runs.
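To illustrate why an empty reply is fatal for automation (a minimal sketch in Python, not taken from the actual test suite): standard XML parsers refuse empty input outright, so a harness cannot even extract a structured error code from the CLI reply.

```python
import xml.etree.ElementTree as ET

def op_ret(xml_text):
    """Parse `gluster vol status --xml` output and return its opRet code."""
    root = ET.fromstring(xml_text)  # raises ParseError on empty output
    return int(root.findtext("opRet"))

# A healthy reply parses cleanly:
ok = '<?xml version="1.0"?><cliOutput><opRet>0</opRet></cliOutput>'
assert op_ret(ok) == 0

# The buggy tier-volume case prints nothing at all, which kills the parser:
try:
    op_ret("")
except ET.ParseError:
    print("empty XML output -- automation run aborts here")
```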


Version-Release number of selected component (if applicable):
==============================================================
Not working on 3.7.4-0.65 and 3.7.4-0.67
(was working on 3.7.4-0.63)


How reproducible:
===================
Easily

Steps to Reproduce:
1. Create a tier volume and start it.
2. Query the volume status in XML format: "gluster vol status <vname> --xml"

Actual results:
===============
No output is displayed for the tier volume; the same command works fine for a regular volume.




Tier volume output:
===================
[root@zod ~]# gluster v status bulgaria
Status of volume: bulgaria
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick yarrow:/rhs/brick7/bulgaria_hot       49155     0          Y       8067 
Brick zod:/rhs/brick7/bulgaria_hot          49155     0          Y       30108
Brick yarrow:/rhs/brick6/bulgaria_hot       49154     0          Y       8087 
Brick zod:/rhs/brick6/bulgaria_hot          49154     0          Y       30126
Cold Bricks:
Brick zod:/rhs/brick1/bulgaria              49152     0          Y       30144
Brick yarrow:/rhs/brick1/bulgaria           49152     0          Y       8105 
Brick zod:/rhs/brick2/bulgaria              49153     0          Y       30162
Brick yarrow:/rhs/brick2/bulgaria           49153     0          Y       8127 
NFS Server on localhost                     2049      0          Y       1923 
Quota Daemon on localhost                   N/A       N/A        Y       30200
NFS Server on yarrow                        2049      0          Y       11924
Quota Daemon on yarrow                      N/A       N/A        Y       8234 
 
Task Status of Volume bulgaria
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 55803993-94aa-407b-b085-b2942c32a289
Status               : in progress         
 
[root@zod ~]# gluster vol status bulgaria  --xml






Regular volume output:
======================
[root@zod ~]# gluster v status testvol
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick zod:/rhs/brick1/testvol               49156     0          Y       1900 
Brick yarrow:/rhs/brick1/testvol            49156     0          Y       11883
NFS Server on localhost                     2049      0          Y       1923 
NFS Server on yarrow                        2049      0          Y       11924
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks
[root@zod ~]# gluster vol status testvol --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>testvol</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>zod</hostname>
          <path>/rhs/brick1/testvol</path>
          <peerid>6aa15961-3ab6-4b6f-8412-8e084b7dfac7</peerid>
          <status>1</status>
          <port>49156</port>
          <ports>
            <tcp>49156</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>1900</pid>
        </node>
        <node>
          <hostname>yarrow</hostname>
          <path>/rhs/brick1/testvol</path>
          <peerid>dcfd0e0f-ad37-4447-801c-5c201a4da70f</peerid>
          <status>1</status>
          <port>49156</port>
          <ports>
            <tcp>49156</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>11883</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <peerid>6aa15961-3ab6-4b6f-8412-8e084b7dfac7</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>1923</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>yarrow</path>
          <peerid>dcfd0e0f-ad37-4447-801c-5c201a4da70f</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>11924</pid>
        </node>
        <tasks/>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
[root@zod ~]#
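The XML above is what the automation scripts consume. A sketch of how a script might pull per-brick status from this structure (hypothetical helper, assuming the element layout shown in the regular-volume output; the embedded XML is abridged from it):

```python
import xml.etree.ElementTree as ET

# Abridged version of the `gluster vol status testvol --xml` reply above.
XML = """<cliOutput><opRet>0</opRet><volStatus><volumes><volume>
  <volName>testvol</volName><nodeCount>2</nodeCount>
  <node><hostname>zod</hostname><path>/rhs/brick1/testvol</path>
        <status>1</status><port>49156</port><pid>1900</pid></node>
  <node><hostname>yarrow</hostname><path>/rhs/brick1/testvol</path>
        <status>1</status><port>49156</port><pid>11883</pid></node>
</volume></volumes></volStatus></cliOutput>"""

def brick_status(xml_text):
    """Return (hostname, path, online?) for every <node> entry."""
    root = ET.fromstring(xml_text)
    return [(n.findtext("hostname"), n.findtext("path"),
             n.findtext("status") == "1")
            for n in root.iter("node")]

print(brick_status(XML))
# [('zod', '/rhs/brick1/testvol', True), ('yarrow', '/rhs/brick1/testvol', True)]
```

With the tiered-volume fix (hotBricks/coldBricks wrappers, see the commit below), `iter("node")` still finds every brick because it walks all descendants.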
Comment 1 Vivek Agarwal 2015-10-08 07:05:42 EDT
http://review.gluster.org/#/c/12302/ in master.
Comment 2 hari gowtham 2015-10-09 03:05:18 EDT
The patch url is: http://review.gluster.org/#/c/12322/
Comment 3 Vijay Bellur 2015-10-10 05:41:36 EDT
REVIEW: http://review.gluster.org/12322 (gluster v status --xml for a replicated hot tier volume) posted (#2) for review on release-3.7 by hari gowtham (hari.gowtham005@gmail.com)
Comment 4 Vijay Bellur 2015-10-10 14:32:13 EDT
COMMIT: http://review.gluster.org/12322 committed in release-3.7 by Dan Lambright (dlambrig@redhat.com) 
------
commit e3e25e81e53fb8c5fdea315a52bca73e3176ef05
Author: hari gowtham <hgowtham@redhat.com>
Date:   Mon Oct 5 16:17:02 2015 +0530

    gluster v status --xml for a replicated hot tier volume
    
            back port of : http://review.gluster.org/#/c/12302/
    
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volStatus>
        <volumes>
          <volume>
            <volName>tiervol</volName>
            <nodeCount>11</nodeCount>
            <hotBricks>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b5_2</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49164</port>
                <ports>
                  <tcp>49164</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8684</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b5_1</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49163</port>
                <ports>
                  <tcp>49163</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8687</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b4_2</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49162</port>
                <ports>
                  <tcp>49162</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8699</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b4_1</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49161</port>
                <ports>
                  <tcp>49161</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8708</pid>
              </node>
            </hotBricks>
            <coldBricks>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b1_1</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49155</port>
                <ports>
                  <tcp>49155</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8716</pid>
              </node>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/b1_2</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>49156</port>
                <ports>
                  <tcp>49156</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8724</pid>
              </node>
              <node>
                <hostname>NFS Server</hostname>
                <path>localhost</path>
                <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
                <status>1</status>
                <port>2049</port>
                <ports>
                  <tcp>2049</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>8678</pid>
              </node>
            </coldBricks>
            <tasks>
              <task>
                <type>Tier migration</type>
                <id>975bfcfa-077c-4edb-beba-409c2013f637</id>
                <status>1</status>
                <statusStr>in progress</statusStr>
              </task>
            </tasks>
          </volume>
        </volumes>
      </volStatus>
    </cliOutput>
    >Change-Id: I69252a36b6e6b2f3cbe5db06e9a716f504a1dba4
    >BUG: 1268810
    >Signed-off-by: hari gowtham <hgowtham@redhat.com>
    >Reviewed-on: http://review.gluster.org/12302
    >Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    >Tested-by: Gluster Build System <jenkins@build.gluster.com>
    >Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
    >Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    >Tested-by: Dan Lambright <dlambrig@redhat.com>
    
    Change-Id: Id354d0969dc7665f082c4d95a423e087878cdb68
    BUG: 1269125
    Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12322
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
Comment 5 Raghavendra Talur 2015-11-17 00:58:48 EST
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
