Bug 1263100 - Data Tiering: Tiering related information is not displayed in gluster volume status xml output
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: low
Assigned To: hari gowtham
bugs@gluster.org
: Reopened
Depends On: 1258347
Blocks: 1260923
Reported: 2015-09-15 02:55 EDT by hari gowtham
Modified: 2016-06-16 09:36 EDT (History)
6 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1258347
Environment:
Last Closed: 2016-06-16 09:36:54 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description hari gowtham 2015-09-15 02:55:04 EDT
+++ This bug was initially created as a clone of Bug #1258347 +++

Description of problem:
Tiering-related information is not displayed in the gluster volume status XML output. It would be good if this information were included in the XML output for automation purposes.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a volume.
2. Attach tier bricks.
3. Execute "gluster volume status --xml"

Actual results:
Tiering-related information is not displayed in the gluster volume status XML output.

Expected results:
Tiering-related information should be displayed in the gluster volume status XML output.

Additional info:
[root@node31 ~]# gluster volume status
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.51:/bricks/brick0/testvol_ti
er1                                         49159     0          Y       6272 
Brick 10.70.47.76:/bricks/brick1/testvol_ti
er0                                         49168     0          Y       20069
Cold Bricks:
Brick 10.70.47.76:/bricks/brick0/testvol_br
ick0                                        49167     0          Y       19975
NFS Server on localhost                     2049      0          Y       20090
NFS Server on 10.70.46.51                   2049      0          Y       6293 
 
Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : bc9c2ca3-0d8e-4096-8fbb-25c61323218b
Status               : in progress         
 
[root@node31 ~]# gluster volume status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>testvol</volName>
        <nodeCount>5</nodeCount>
        <node>
          <hostname>10.70.46.51</hostname>
          <path>/bricks/brick0/testvol_tier1</path>
          <peerid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</peerid>
          <status>1</status>
          <port>49159</port>
          <ports>
            <tcp>49159</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6272</pid>
        </node>
        <node>
          <hostname>10.70.47.76</hostname>
          <path>/bricks/brick1/testvol_tier0</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>49168</port>
          <ports>
            <tcp>49168</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20069</pid>
        </node>
        <node>
          <hostname>10.70.47.76</hostname>
          <path>/bricks/brick0/testvol_brick0</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>49167</port>
          <ports>
            <tcp>49167</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>19975</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20090</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>10.70.46.51</path>
          <peerid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6293</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>bc9c2ca3-0d8e-4096-8fbb-25c61323218b</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
[root@node31 ~]#
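For automation, the problem with the output above is that every brick appears as a flat <node> entry, with nothing marking which tier it belongs to. A minimal Python sketch (using a trimmed copy of the XML above) shows what a test script can and cannot recover:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the pre-fix `gluster volume status --xml` output above:
# all bricks are flat <node> elements with no hot/cold tier grouping.
xml_text = """<cliOutput>
  <volStatus><volumes><volume>
    <volName>testvol</volName>
    <node>
      <hostname>10.70.46.51</hostname>
      <path>/bricks/brick0/testvol_tier1</path>
    </node>
    <node>
      <hostname>10.70.47.76</hostname>
      <path>/bricks/brick0/testvol_brick0</path>
    </node>
  </volume></volumes></volStatus>
</cliOutput>"""

root = ET.fromstring(xml_text)
# Brick paths are recoverable...
paths = [n.findtext("path") for n in root.iter("node")]
print(paths)
# ...but there is no element saying which of these bricks is hot or cold;
# the only hint is the brick path naming convention, which automation
# cannot rely on.
```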

--- Additional comment from nchilaka on 2015-08-31 07:59:08 EDT ---

Hi Dan,
We need this fixed with the highest priority so that we can continue with automation;
otherwise our automation may be blocked.

--- Additional comment from Mohammed Rafi KC on 2015-09-01 08:29:56 EDT ---
Comment 1 Vijay Bellur 2015-09-15 02:56:25 EDT
REVIEW: http://review.gluster.org/12176 (Tier/cli: tier related information in volume status command) posted (#1) for review on master by hari gowtham (hari.gowtham005@gmail.com)
Comment 2 Vijay Bellur 2015-09-18 09:45:14 EDT
COMMIT: http://review.gluster.org/12176 committed in master by Dan Lambright (dlambrig@redhat.com) 
------
commit cb8db8dfce0394e30cb25983a402de7beaa9c63f
Author: hari gowtham <hgowtham@redhat.com>
Date:   Tue Sep 15 11:07:03 2015 +0530

    Tier/cli: tier related information in volume status command
    
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volStatus>
        <volumes>
          <volume>
            <volName>v1</volName>
            <nodeCount>5</nodeCount>
            <hotBrick>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/hbr1</path>
                <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
                <status>1</status>
                <port>49154</port>
                <ports>
                  <tcp>49154</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>6535</pid>
              </node>
            </hotBrick>
            <coldBrick>
              <node>
                <hostname>10.70.42.203</hostname>
                <path>/data/gluster/tier/cb1</path>
                <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
                <status>1</status>
                <port>49152</port>
                <ports>
                  <tcp>49152</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>6530</pid>
              </node>
            </coldBrick>
            <coldBrick>
              <node>
                <hostname>NFS Server</hostname>
                <path>10.70.42.203</path>
                <peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
                <status>1</status>
                <port>2049</port>
                <ports>
                  <tcp>2049</tcp>
                  <rdma>N/A</rdma>
                </ports>
                <pid>6519</pid>
              </node>
            </coldBrick>
            <tasks>
              <task>
                <type>Rebalance</type>
                <id>8da729f2-f1b2-4f55-9945-472130be93f7</id>
                <status>4</status>
                <statusStr>failed</statusStr>
              </task>
            </tasks>
          </volume>
        </volumes>
      </volStatus>
    </cliOutput>
    
    Change-Id: Idfdbce47d03ee2cdbf407c57159fd37a2900ad2c
    BUG: 1263100
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/12176
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
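With the committed change, the hot and cold tiers can be told apart from the XML alone via the new <hotBrick> and <coldBrick> wrapper elements. A minimal sketch of how automation might consume them (using a trimmed copy of the commit's example output):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the post-fix XML from the commit message above.
xml_text = """<cliOutput>
  <volStatus><volumes><volume>
    <volName>v1</volName>
    <hotBrick>
      <node><path>/data/gluster/tier/hbr1</path></node>
    </hotBrick>
    <coldBrick>
      <node><path>/data/gluster/tier/cb1</path></node>
    </coldBrick>
  </volume></volumes></volStatus>
</cliOutput>"""

root = ET.fromstring(xml_text)
# Collect brick paths per tier by walking the new wrapper elements.
hot = [n.findtext("path") for hb in root.iter("hotBrick") for n in hb.iter("node")]
cold = [n.findtext("path") for cb in root.iter("coldBrick") for n in cb.iter("node")]
print("hot:", hot)    # hot: ['/data/gluster/tier/hbr1']
print("cold:", cold)  # cold: ['/data/gluster/tier/cb1']
```

Note that NFS server entries also appear under <coldBrick> in the commit's example, so a stricter script may want to filter nodes whose <hostname> is not "NFS Server".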
Comment 3 Niels de Vos 2016-06-16 09:36:54 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
