Bug 1262195 - Data Tiering: Tiering related information is not displayed in gluster volume info xml output
Summary: Data Tiering: Tiering related information is not displayed in gluster volume info xml output
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: All
OS: All
Priority: urgent
Severity: high
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1258338
Blocks: 1260923
 
Reported: 2015-09-11 06:52 UTC by hari gowtham
Modified: 2018-10-08 09:53 UTC
CC: 4 users

Fixed In Version: glusterfs-4.1.4
Doc Type: Bug Fix
Doc Text:
Clone Of: 1258338
Environment:
Last Closed: 2018-10-08 09:53:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description hari gowtham 2015-09-11 06:52:35 UTC
+++ This bug was initially created as a clone of Bug #1258338 +++

Description of problem:
Tiering-related information is not displayed in the gluster volume info XML output. It would be good if this information were included in the XML output for automation purposes.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a volume.
2. Attach tier bricks.
3. Execute "gluster volume info --xml"
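For reference, a minimal reproduction sketch, assuming a working two-node trusted storage pool; the brick paths are taken from the output in the additional info below, and the attach-tier syntax may vary slightly between releases:

# create a plain volume, attach a hot tier, then inspect the XML output
gluster volume create testvol 10.70.47.76:/bricks/brick0/testvol_brick0
gluster volume start testvol
gluster volume attach-tier testvol \
    10.70.46.51:/bricks/brick0/testvol_tier1 \
    10.70.47.76:/bricks/brick1/testvol_tier0
gluster volume info testvol --xml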

Actual results:
Tiering related information is not displayed in gluster volume info xml output

Expected results:
Tiering related information should be displayed in gluster volume info xml output
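For example, a script would ideally be able to count the hot and cold tier bricks straight from the XML. A hypothetical query sketch follows; the element names (hotBricks, coldBricks) are placeholders for illustration only, not the actual structure introduced by any fix:

# hypothetical element names -- placeholders only
gluster volume info testvol --xml | xmllint --xpath 'count(//volume/hotBricks/brick)' -
gluster volume info testvol --xml | xmllint --xpath 'count(//volume/coldBricks/brick)' -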

Additional info:

[root@node31 ~]# gluster volume info
 
Volume Name: testvol
Type: Tier
Volume ID: 496dfa0d-a370-4dd9-84b5-4048e91aef71
Status: Started
Number of Bricks: 3
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.51:/bricks/brick0/testvol_tier1
Brick2: 10.70.47.76:/bricks/brick1/testvol_tier0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 1
Brick3: 10.70.47.76:/bricks/brick0/testvol_brick0
Options Reconfigured:
performance.readdir-ahead: on
[root@node31 ~]# 
[root@node31 ~]# gluster volume info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>testvol</name>
        <id>496dfa0d-a370-4dd9-84b5-4048e91aef71</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <brickCount>3</brickCount>
        <distCount>1</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>1</replicaCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>5</type>
        <typeStr>Tier</typeStr>
        <transport>0</transport>
        <xlators/>
        <bricks>
          <brick uuid="9d77138d-ce50-4fdd-9dad-6c4efbd391e7">10.70.46.51:/bricks/brick0/testvol_tier1<name>10.70.46.51:/bricks/brick0/testvol_tier1</name><hostUuid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick1/testvol_tier0<name>10.70.47.76:/bricks/brick1/testvol_tier0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick0/testvol_brick0<name>10.70.47.76:/bricks/brick0/testvol_brick0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
        </bricks>
        <optCount>1</optCount>
        <options>
          <option>
            <name>performance.readdir-ahead</name>
            <value>on</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
[root@node31 ~]#
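To illustrate the gap, the XML above can be queried for the volume type and the flat brick list, but nothing distinguishes hot-tier bricks from cold-tier bricks; a sketch assuming xmllint is available:

# volume type and total brick count are exposed
gluster volume info testvol --xml | xmllint --xpath '//volume/typeStr/text()' -
# -> Tier
gluster volume info testvol --xml | xmllint --xpath 'count(//volume/bricks/brick)' -
# -> 3
# but there is no XML equivalent of the plain-text "Hot Tier" / "Cold Tier"
# sections, so a script cannot tell which bricks belong to which tier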

--- Additional comment from nchilaka on 2015-08-31 07:58:42 EDT ---

Hi Dan,
We need this fixed with the highest priority to help us continue with automation; otherwise our automation may be blocked.

--- Additional comment from Mohammed Rafi KC on 2015-09-01 08:29:56 EDT ---

Comment 1 Amar Tumballi 2018-10-08 09:53:38 UTC
This bug was in ON_QA status, and on the GlusterFS product in Bugzilla we don't have that as a valid status. We are closing it as 'CURRENTRELEASE' to indicate the availability of the fix; please reopen if the issue is found again.

