Bug 1278073 - The "--xml" option doesn't work for some commands of "snapshot" feature
Summary: The "--xml" option doesn't work for some commands of "snapshot" feature
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: 3.7.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: rjoseph
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-11-04 16:20 UTC by Hsiu-Chang Chen
Modified: 2017-03-08 11:00 UTC
CC List: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-03-08 11:00:58 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Hsiu-Chang Chen 2015-11-04 16:20:54 UTC
Description of problem:
1. When I try to get the snapshot status in XML format, the "--xml" option doesn't work for the command "gluster snapshot status".
2. Showing the snapshot status for a specified snapshot in XML format doesn't show anything.

Version-Release number of selected component (if applicable):
GlusterFS 3.7.5

How reproducible:

Steps to Reproduce:
1. Create a volume with multiple bricks
2. Take several snapshots of this volume
3. Run the command "gluster snapshot status --xml" (a full command sequence is sketched below)
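A minimal command sequence along these lines reproduces it (host name, brick paths, and volume/snapshot names here are placeholders; the bricks must sit on thin-provisioned LVM for GlusterFS snapshots to work):

# Placeholder host/paths; bricks must be on thin-provisioned LVM.
gluster volume create vol1 gfs-m1:/bricks/b1/fs gfs-m1:/bricks/b2/fs \
        gfs-m1:/bricks/b3/fs gfs-m1:/bricks/b4/fs force
gluster volume start vol1
gluster snapshot create snap1 vol1
gluster snapshot create snap2 vol1
gluster snapshot status           # plain-text output: correct
gluster snapshot status --xml     # XML output: malformed (see below)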

Actual results:

I have a volume created with 4 bricks and have already taken 2 snapshots.
When I run the command "gluster snapshot status", I get:

Snap Name : snap1_GMT-2015.11.03-02.51.11
Snap UUID : 95dca986-7492-42f8-8a20-133e186dbdf7

        Brick Path        :   gfs-m1:/run/gluster/snaps/dd45a201cd1c4788b05b57b5a652aeb8/brick1/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g


        Brick Path        :   gfs-m1:/run/gluster/snaps/dd45a201cd1c4788b05b57b5a652aeb8/brick2/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g


        Brick Path        :   gfs-m1:/run/gluster/snaps/dd45a201cd1c4788b05b57b5a652aeb8/brick3/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g


        Brick Path        :   gfs-m1:/run/gluster/snaps/dd45a201cd1c4788b05b57b5a652aeb8/brick4/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g


Snap Name : snap2_GMT-2015.11.03-02.51.57
Snap UUID : 9b31a4fd-6e04-460d-9aaf-b16d1bfed8a2

        Brick Path        :   gfs-m1:/run/gluster/snaps/3aaf2d06b3034799bab05ba1f1479e9c/brick1/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g


        Brick Path        :   gfs-m1:/run/gluster/snaps/3aaf2d06b3034799bab05ba1f1479e9c/brick2/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g


        Brick Path        :   gfs-m1:/run/gluster/snaps/3aaf2d06b3034799bab05ba1f1479e9c/brick3/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g


        Brick Path        :   gfs-m1:/run/gluster/snaps/3aaf2d06b3034799bab05ba1f1479e9c/brick4/fs
        Volume Group      :   vg1
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   0.31
        LV Size           :   100.00g

But when I run the same command with the "--xml" option, I get:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapStatus>
    <snapshots>
      <snapshot>
        <name>snap1_GMT-2015.11.03-02.51.11</name>
        <uuid>95dca986-7492-42f8-8a20-133e186dbdf7</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>4</brickCount>
          <brick>
            <path>gfs-m1:/run/gluster/snaps/dd45a201cd1c4788b05b57b5a652aeb8/brick1/fs</path>
            <volumeGroup>vg1</volumeGroup>
            <snapshot>
              <name>snap2_GMT-2015.11.03-02.51.57</name>
              <uuid>9b31a4fd-6e04-460d-9aaf-b16d1bfed8a2</uuid>
              <volCount>1</volCount>
              <volume>
                <brickCount>4</brickCount>
                <brick>
                  <path>gfs-m1:/run/gluster/snaps/3aaf2d06b3034799bab05ba1f1479e9c/brick1/fs</path>
                  <volumeGroup>vg1</volumeGroup>
                </brick>
              </volume>
            </snapshot>
          </brick>
        </volume>
      </snapshot>
    </snapshots>
  </snapStatus>
</cliOutput>

This nests the status of snap2 inside snap1's first <brick> element, and shows information for only one brick per snapshot.

Expected results:

The XML output should list snap1 and snap2 as sibling <snapshot> elements, each containing all four <brick> entries of its volume, matching the plain-text output above.
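Something like the following is what I would expect (a hand-written sketch reusing only the element names already present in the actual output, not real tool output; the per-brick element names beyond <path> and <volumeGroup> are guesses based on the plain-text fields):

<snapStatus>
  <snapshots>
    <snapshot>
      <name>snap1_GMT-2015.11.03-02.51.11</name>
      <uuid>95dca986-7492-42f8-8a20-133e186dbdf7</uuid>
      <volCount>1</volCount>
      <volume>
        <brickCount>4</brickCount>
        <brick>
          <path>gfs-m1:/run/gluster/snaps/dd45a201cd1c4788b05b57b5a652aeb8/brick1/fs</path>
          <volumeGroup>vg1</volumeGroup>
          <!-- plus brick running / PID / data percentage / LV size; element names assumed -->
        </brick>
        <!-- bricks 2-4 as sibling <brick> elements -->
      </volume>
    </snapshot>
    <snapshot>
      <name>snap2_GMT-2015.11.03-02.51.57</name>
      <uuid>9b31a4fd-6e04-460d-9aaf-b16d1bfed8a2</uuid>
      <volCount>1</volCount>
      <volume>
        <brickCount>4</brickCount>
        <!-- four <brick> elements as above -->
      </volume>
    </snapshot>
  </snapshots>
</snapStatus>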
Additional info:

Comment 1 Hsiu-Chang Chen 2015-11-05 01:22:39 UTC
The "snapshot clone" also has problems with "--xml"
When clone success, it returns nothing if has "--xml" option.
But it returns correct results if clone fail.

Comment 2 Hsiu-Chang Chen 2015-11-05 01:41:15 UTC
(In reply to Hsiu-Chang Chen from comment #1)
> The "snapshot clone" also has problems with "--xml"
> When clone success, it returns nothing if has "--xml" option.
> But it returns correct results if clone fail.

It's the results I made:

root@localhost [~]# gluster --xml snapshot clone clone1 snap1
root@localhost [~]# gluster --xml snapshot clone clone1 snap1
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>30811</opErrno>
  <opErrstr>Volume with name:clone1 already exists</opErrstr>
</cliOutput>

The first command succeeds but doesn't show anything.
The second command fails and displays the error message.
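A quick way to catch both failure modes from a script (assuming xmllint from libxml2 is installed; it exits non-zero on both empty and malformed input, so a zero exit means we actually got well-formed XML):

# Pipe the CLI output through xmllint; "-" reads stdin, --noout
# suppresses echoing the document and only reports parse errors.
gluster --xml snapshot clone clone2 snap1 | xmllint --noout -
echo "xmllint exit status: $?"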

Comment 3 Kaushal 2017-03-08 11:00:58 UTC
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

