Bug 1325821 - gluster snap status xml output shows incorrect details when the snapshots are in deactivated state
Summary: gluster snap status xml output shows incorrect details when the snapshots are in deactivated state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Avra Sengupta
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1325831 1351522 1369363 1369372
 
Reported: 2016-04-11 10:07 UTC by Arthy Loganathan
Modified: 2017-03-23 05:28 UTC
CC: 5 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned To: 1325831
Environment:
Last Closed: 2017-03-23 05:28:24 UTC
Embargoed:




Links:
  System: Red Hat Product Errata
  ID: RHSA-2017:0486
  Private: 0
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update
  Last Updated: 2017-03-23 09:18:45 UTC

Description Arthy Loganathan 2016-04-11 10:07:43 UTC
Description of problem:
gluster snap status xml output shows incorrect details when the snapshots are in deactivated state

Version-Release number of selected component (if applicable):
glusterfs-server-3.7.9-1.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a snapshot and leave it in the deactivated state
2. Run "gluster snap status --xml"

Actual results:
The XML output lists only the first brick, even though <brickCount> reports 4, and omits the Brick Running, Brick PID, Data Percentage and LV Size fields shown by the plain CLI output.

Expected results:
The XML output should list every brick of the snapshot volume with the same per-brick details as the plain "gluster snap status" output.

Additional info:

CLI Output:

[root@node94 ~]# gluster snap status snap1

Snap Name : snap1
Snap UUID : a322d93a-2732-447d-ab88-b943fa402fd2

	Brick Path        :   10.70.47.11:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick1/testvol_brick0
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.52
	LV Size           :   9.95g


	Brick Path        :   10.70.47.16:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick2/testvol_brick1
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.52
	LV Size           :   9.95g


	Brick Path        :   10.70.47.152:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick3/testvol_brick2
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.51
	LV Size           :   9.95g


	Brick Path        :   10.70.46.52:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick4/testvol_brick3
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.54
	LV Size           :   9.95g

XML Output:

[root@node94 ~]# gluster snap status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapStatus>
    <snapshots>
      <snapshot>
        <name>snap1</name>
        <uuid>a322d93a-2732-447d-ab88-b943fa402fd2</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>4</brickCount>
          <brick>
            <path>10.70.47.11:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick1/testvol_brick0</path>
            <volumeGroup>RHS_vg0</volumeGroup>
          </brick>
        </volume>
      </snapshot>
    </snapshots>
  </snapStatus>
</cliOutput>
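
The discrepancy can be checked mechanically by comparing the brick count in the plain CLI output with the number of <brick> elements in the XML. A minimal sketch, assuming xmllint from libxml2 is installed on the node and using the snapshot name snap1 from the outputs above:

# Count bricks reported by the plain CLI output vs. the XML output.
cli_bricks=$(gluster snap status snap1 | grep -c 'Brick Path')
xml_bricks=$(gluster snap status --xml | \
             xmllint --xpath 'count(//snapshot[name="snap1"]//brick)' -)
echo "plain CLI bricks: ${cli_bricks}, XML bricks: ${xml_bricks}"
# On the affected build the two counts differ (4 vs. 1), and the XML <brick>
# entries carry only <path> and <volumeGroup>.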

Comment 2 Arthy Loganathan 2016-04-11 11:20:43 UTC
Also, XML output is not generated when gluster snap status is executed with a snap name:

gluster snap status snap1 --xml
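
One way to confirm this is to pipe the command through an XML parser; a sketch, assuming xmllint from libxml2 is available:

gluster snap status snap1 --xml | xmllint --noout -
echo "xmllint exit status: $?"
# A non-zero exit status means no well-formed XML document was emitted,
# which matches the behaviour described above.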

Comment 3 Atin Mukherjee 2016-08-09 04:25:36 UTC
Upstream mainline patch http://review.gluster.org/14018 posted for review.

Comment 5 Atin Mukherjee 2016-09-17 14:55:43 UTC
Upstream mainline : http://review.gluster.org/14018
Upstream 3.8 : http://review.gluster.org/15291

The fix is available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.

Comment 8 Anil Shah 2016-10-20 05:53:31 UTC
[root@rhs-client46 core]# gluster snapshot status snap1 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapStatus>
    <snapshots>
      <snapshot>
        <name>snap1</name>
        <uuid>fa6a58c8-a6c5-4819-b432-3f63b7be4958</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>6</brickCount>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick1/b1</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.60</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick2/b2</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.59</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.46:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick3/b3</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.05</data_percentage>
            <lvSize>1.80t</lvSize>
          </brick>
          <brick>
            <path>10.70.44.7:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick4/b4</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>1.45</data_percentage>
            <lvSize>926.85g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick5/b5</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>6.09</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick6/b6</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.06</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
        </volume>
      </snapshot>
    </snapshots>
  </snapStatus>
</cliOutput>

====================================================
[root@rhs-client46 core]# gluster snapshot status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapStatus>
    <snapshots>
      <snapshot>
        <name>snap1</name>
        <uuid>fa6a58c8-a6c5-4819-b432-3f63b7be4958</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>6</brickCount>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick1/b1</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>9228</pid>
            <data_percentage>4.60</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick2/b2</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>3887</pid>
            <data_percentage>4.59</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.46:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick3/b3</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>19396</pid>
            <data_percentage>0.05</data_percentage>
            <lvSize>1.80t</lvSize>
          </brick>
          <brick>
            <path>10.70.44.7:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick4/b4</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>31636</pid>
            <data_percentage>1.45</data_percentage>
            <lvSize>926.85g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick5/b5</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>9250</pid>
            <data_percentage>6.09</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick6/b6</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>3909</pid>
            <data_percentage>0.06</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
        </volume>
      </snapshot>
      <snapshot>
        <name>snap2</name>
        <uuid>15d24607-12ff-4006-8703-cd30f32a306f</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>6</brickCount>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick1/b1</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.60</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick2/b2</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.59</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.46:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick3/b3</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.05</data_percentage>
            <lvSize>1.80t</lvSize>
          </brick>
          <brick>
            <path>10.70.44.7:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick4/b4</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>1.45</data_percentage>
            <lvSize>926.85g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick5/b5</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>6.09</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick6/b6</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.06</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
        </volume>
      </snapshot>
    </snapshots>
  </snapStatus>
</cliOutput>


Bug verified on build glusterfs-3.8.4-2.el7rhgs.x86_64
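
For regression runs, the presence of the per-brick status fields can be asserted with a small script. A sketch, assuming xmllint from libxml2 is installed; the element names are the ones visible in the output above:

# Each field should appear once per <brick> element
# (12 times in the output above: 6 bricks each for snap1 and snap2).
xml=$(gluster snapshot status --xml)
for field in brick_running pid data_percentage lvSize; do
    count=$(echo "${xml}" | xmllint --xpath "count(//brick/${field})" -)
    echo "${field}: present in ${count} brick entries"
done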

Comment 10 errata-xmlrpc 2017-03-23 05:28:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

