Bug 1258338 - Data Tiering: Tiering related information is not displayed in gluster volume info xml output
Summary: Data Tiering: Tiering related information is not displayed in gluster volume info xml output
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: All
OS: All
Priority: urgent
Severity: high
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1260923 1262195
 
Reported: 2015-08-31 06:36 UTC by Arthy Loganathan
Modified: 2015-10-30 17:32 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1262195
Environment:
Last Closed: 2015-10-14 10:28:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Arthy Loganathan 2015-08-31 06:36:24 UTC
Description of problem:
Tiering-related information is not displayed in the gluster volume info XML output. It would be helpful if this information were included in the XML output for automation purposes.
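
To illustrate the automation use case, a script might try to read the tier layout straight from the XML, for example with xmllint. The <hotBricks>/<coldBricks> element names below are only an assumption of what such output could look like, not an existing schema; the current output carries no tier-specific elements at all:

  # hypothetical queries -- they fail on the current output because no such elements are emitted
  gluster volume info testvol --xml | xmllint --xpath '//volume/bricks/hotBricks/brick' -
  gluster volume info testvol --xml | xmllint --xpath '//volume/bricks/coldBricks/brick' -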

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a volume.
2. Attach tier bricks.
3. Execute "gluster volume info --xml"
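
For example, the tiered volume shown under Additional info can be set up roughly as follows (brick paths are taken from the output below; the attach-tier syntax shown is the 3.7-era form and may vary slightly between releases):

  gluster volume create testvol 10.70.47.76:/bricks/brick0/testvol_brick0
  gluster volume start testvol
  gluster volume attach-tier testvol 10.70.46.51:/bricks/brick0/testvol_tier1 10.70.47.76:/bricks/brick1/testvol_tier0
  gluster volume info --xml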

Actual results:
Tiering-related information is not displayed in the gluster volume info XML output.

Expected results:
Tiering-related information should be displayed in the gluster volume info XML output.
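
For reference, a rough sketch of what tier-aware XML output could look like (the <hotBricks>/<coldBricks> element names and layout are illustrative guesses, not necessarily what the eventual fix will emit):

        <bricks>
          <hotBricks>
            <brick>10.70.46.51:/bricks/brick0/testvol_tier1</brick>
            <brick>10.70.47.76:/bricks/brick1/testvol_tier0</brick>
          </hotBricks>
          <coldBricks>
            <brick>10.70.47.76:/bricks/brick0/testvol_brick0</brick>
          </coldBricks>
        </bricks>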

Additional info:

[root@node31 ~]# gluster volume info
 
Volume Name: testvol
Type: Tier
Volume ID: 496dfa0d-a370-4dd9-84b5-4048e91aef71
Status: Started
Number of Bricks: 3
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.51:/bricks/brick0/testvol_tier1
Brick2: 10.70.47.76:/bricks/brick1/testvol_tier0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 1
Brick3: 10.70.47.76:/bricks/brick0/testvol_brick0
Options Reconfigured:
performance.readdir-ahead: on
[root@node31 ~]# 
[root@node31 ~]# gluster volume info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>testvol</name>
        <id>496dfa0d-a370-4dd9-84b5-4048e91aef71</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <brickCount>3</brickCount>
        <distCount>1</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>1</replicaCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>5</type>
        <typeStr>Tier</typeStr>
        <transport>0</transport>
        <xlators/>
        <bricks>
          <brick uuid="9d77138d-ce50-4fdd-9dad-6c4efbd391e7">10.70.46.51:/bricks/brick0/testvol_tier1<name>10.70.46.51:/bricks/brick0/testvol_tier1</name><hostUuid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick1/testvol_tier0<name>10.70.47.76:/bricks/brick1/testvol_tier0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick0/testvol_brick0<name>10.70.47.76:/bricks/brick0/testvol_brick0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
        </bricks>
        <optCount>1</optCount>
        <options>
          <option>
            <name>performance.readdir-ahead</name>
            <value>on</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
[root@node31 ~]#

Comment 1 Nag Pavan Chilakam 2015-08-31 11:58:42 UTC
Hi Dan,
We need this fixed with the highest priority so that we can continue with our automation work; otherwise our automation may be blocked.

Comment 2 Mohammed Rafi KC 2015-09-01 12:29:56 UTC
*** Bug 1258347 has been marked as a duplicate of this bug. ***

Comment 3 Pranith Kumar K 2015-10-14 10:28:00 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

Comment 4 Pranith Kumar K 2015-10-14 10:37:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

