Bug 1258347 - Data Tiering: Tiering related information is not displayed in gluster volume status xml output
Summary: Data Tiering: Tiering related information is not displayed in gluster volume status xml output
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: low
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1260923 1263100
 
Reported: 2015-08-31 07:03 UTC by Arthy Loganathan
Modified: 2015-10-30 17:32 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1263100
Environment:
Last Closed: 2015-10-14 10:28:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Arthy Loganathan 2015-08-31 07:03:35 UTC
Description of problem:
Tiering-related information is not displayed in the gluster volume status XML output. It would be good if this information were included in the XML output for automation purposes.
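
For context, here is a minimal sketch (assuming Python 3 with only the standard library on the test node, and the volume name "testvol" from the output further below) of the kind of automation check that is blocked today. The current XML emits every brick and service as a flat <node> element, so a script has no way to tell hot-tier bricks from cold-tier bricks:

# Sketch only: enumerates the flat <node> list produced by the current output.
import subprocess
import xml.etree.ElementTree as ET

out = subprocess.check_output(["gluster", "volume", "status", "testvol", "--xml"])
root = ET.fromstring(out)

for node in root.iter("node"):
    # Each brick and NFS server appears as a flat <node>; nothing in the
    # element indicates whether a brick belongs to the hot or cold tier.
    print(node.findtext("hostname"), node.findtext("path"))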

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a volume.
2. Attach tier bricks.
3. Execute "gluster volume status --xml"

Actual results:
Tiering-related information is not displayed in the gluster volume status XML output.

Expected results:
Tiering-related information should be displayed in the gluster volume status XML output.

Additional info:
[root@node31 ~]# gluster volume status
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.51:/bricks/brick0/testvol_ti
er1                                         49159     0          Y       6272 
Brick 10.70.47.76:/bricks/brick1/testvol_ti
er0                                         49168     0          Y       20069
Cold Bricks:
Brick 10.70.47.76:/bricks/brick0/testvol_br
ick0                                        49167     0          Y       19975
NFS Server on localhost                     2049      0          Y       20090
NFS Server on 10.70.46.51                   2049      0          Y       6293 
 
Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : bc9c2ca3-0d8e-4096-8fbb-25c61323218b
Status               : in progress         
 
[root@node31 ~]# gluster volume status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>testvol</volName>
        <nodeCount>5</nodeCount>
        <node>
          <hostname>10.70.46.51</hostname>
          <path>/bricks/brick0/testvol_tier1</path>
          <peerid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</peerid>
          <status>1</status>
          <port>49159</port>
          <ports>
            <tcp>49159</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6272</pid>
        </node>
        <node>
          <hostname>10.70.47.76</hostname>
          <path>/bricks/brick1/testvol_tier0</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>49168</port>
          <ports>
            <tcp>49168</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20069</pid>
        </node>
        <node>
          <hostname>10.70.47.76</hostname>
          <path>/bricks/brick0/testvol_brick0</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>49167</port>
          <ports>
            <tcp>49167</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>19975</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20090</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>10.70.46.51</path>
          <peerid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6293</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>bc9c2ca3-0d8e-4096-8fbb-25c61323218b</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
[root@node31 ~]#
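
For comparison, here is a hedged sketch of how automation could consume a tier-aware status document. The <hotBricks>/<coldBricks> grouping used here is an assumption for illustration only; this report does not specify the schema that glusterfs-3.7.5 actually ships, so the real tag names may differ:

# Sketch only: the <hotBricks>/<coldBricks> wrappers are assumed for
# illustration; the schema shipped in glusterfs-3.7.5 may differ.
import xml.etree.ElementTree as ET

SAMPLE = """
<volume>
  <volName>testvol</volName>
  <hotBricks>
    <node><hostname>10.70.46.51</hostname><path>/bricks/brick0/testvol_tier1</path></node>
    <node><hostname>10.70.47.76</hostname><path>/bricks/brick1/testvol_tier0</path></node>
  </hotBricks>
  <coldBricks>
    <node><hostname>10.70.47.76</hostname><path>/bricks/brick0/testvol_brick0</path></node>
  </coldBricks>
</volume>
"""

root = ET.fromstring(SAMPLE)
for tier in ("hotBricks", "coldBricks"):
    section = root.find(tier)
    if section is None:
        continue  # output without tier grouping (pre-fix behaviour)
    for node in section.iter("node"):
        print(tier, node.findtext("hostname"), node.findtext("path"))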

Comment 1 Nag Pavan Chilakam 2015-08-31 11:59:08 UTC
Hi Dan,
We need this fixed with the highest priority so that we can continue with our automation; otherwise the automation may be blocked.

Comment 2 Mohammed Rafi KC 2015-09-01 12:29:56 UTC

*** This bug has been marked as a duplicate of bug 1258338 ***

Comment 3 Pranith Kumar K 2015-10-14 10:28:44 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

Comment 4 Pranith Kumar K 2015-10-14 10:37:59 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

