Bug 1258338 - Data Tiering: Tiering related information is not displayed in gluster volume info xml output
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: All
OS: All
Priority: urgent
Severity: high
Assigned To: hari gowtham
bugs@gluster.org
: Triaged
Depends On:
Blocks: 1262195 1260923
 
Reported: 2015-08-31 02:36 EDT by Arthy Loganathan
Modified: 2015-10-30 13:32 EDT (History)
CC List: 5 users

See Also:
Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1262195 (view as bug list)
Environment:
Last Closed: 2015-10-14 06:28:00 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Arthy Loganathan 2015-08-31 02:36:24 EDT
Description of problem:
Tiering-related information is not displayed in the gluster volume info XML output. It would be helpful to include this information in the XML output for automation purposes.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a volume.
2. Attach tier bricks.
3. Execute "gluster volume info --xml" (a command sketch follows below).
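
A minimal command sketch of the reproduction steps, assuming the attach-tier syntax of the 3.7.x series (the volume name, hosts, and brick paths are taken from the output under Additional info; adjust them to your environment):

# 1. Create and start a plain single-brick volume (this becomes the cold tier)
gluster volume create testvol 10.70.47.76:/bricks/brick0/testvol_brick0
gluster volume start testvol

# 2. Attach a two-brick distributed hot tier
gluster volume attach-tier testvol \
    10.70.46.51:/bricks/brick0/testvol_tier1 \
    10.70.47.76:/bricks/brick1/testvol_tier0

# 3. Compare the plain and XML views of the volume
gluster volume info testvol
gluster volume info testvol --xml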

Actual results:
Tiering-related information is not displayed in the gluster volume info XML output.

Expected results:
Tiering-related information should be displayed in the gluster volume info XML output.
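
For illustration only, a sketch of the kind of per-tier breakdown the XML output could carry alongside the existing flat <bricks> list. The element names used here (hotBricks, coldBricks) are hypothetical placeholders, not necessarily the names chosen by the eventual fix:

        <bricks>
          <hotBricks>
            <brick>10.70.46.51:/bricks/brick0/testvol_tier1</brick>
            <brick>10.70.47.76:/bricks/brick1/testvol_tier0</brick>
          </hotBricks>
          <coldBricks>
            <brick>10.70.47.76:/bricks/brick0/testvol_brick0</brick>
          </coldBricks>
        </bricks>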

Additional info:

[root@node31 ~]# gluster volume info
 
Volume Name: testvol
Type: Tier
Volume ID: 496dfa0d-a370-4dd9-84b5-4048e91aef71
Status: Started
Number of Bricks: 3
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.51:/bricks/brick0/testvol_tier1
Brick2: 10.70.47.76:/bricks/brick1/testvol_tier0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 1
Brick3: 10.70.47.76:/bricks/brick0/testvol_brick0
Options Reconfigured:
performance.readdir-ahead: on
[root@node31 ~]# 
[root@node31 ~]# gluster volume info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>testvol</name>
        <id>496dfa0d-a370-4dd9-84b5-4048e91aef71</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <brickCount>3</brickCount>
        <distCount>1</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>1</replicaCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>5</type>
        <typeStr>Tier</typeStr>
        <transport>0</transport>
        <xlators/>
        <bricks>
          <brick uuid="9d77138d-ce50-4fdd-9dad-6c4efbd391e7">10.70.46.51:/bricks/brick0/testvol_tier1<name>10.70.46.51:/bricks/brick0/testvol_tier1</name><hostUuid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick1/testvol_tier0<name>10.70.47.76:/bricks/brick1/testvol_tier0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick0/testvol_brick0<name>10.70.47.76:/bricks/brick0/testvol_brick0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
        </bricks>
        <optCount>1</optCount>
        <options>
          <option>
            <name>performance.readdir-ahead</name>
            <value>on</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
[root@node31 ~]#
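
A minimal sketch of why this matters for automation; xmllint (from libxml2) is assumed here only for illustration, and any XML parser shows the same gap:

# The volume type is exposed in the XML...
gluster volume info testvol --xml | xmllint --xpath 'string(//volume/typeStr)' -   # prints "Tier"
# ...but the Hot Tier / Cold Tier brick breakdown printed by the plain-text
# command has no corresponding elements for a script to query.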
Comment 1 nchilaka 2015-08-31 07:58:42 EDT
Hi Dan,
We need this fixed with the highest priority so that we can continue with our automation.
Otherwise, our automation may be blocked.
Comment 2 Mohammed Rafi KC 2015-09-01 08:29:56 EDT
*** Bug 1258347 has been marked as a duplicate of this bug. ***
Comment 4 Pranith Kumar K 2015-10-14 06:37:35 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
