Description of problem:
gluster snap status --xml output shows incomplete details when the snapshots are in a deactivated state.

Version-Release number of selected component (if applicable):
glusterfs-server-3.7.9-1.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a snapshot
2. Run "gluster snap status --xml"

Actual results:
The XML output lists only the first brick, and for that brick only <path> and <volumeGroup>. The per-brick status fields that the plain CLI prints (Brick Running, Brick PID, Data Percentage, LV Size) are missing, and the remaining bricks are absent even though <brickCount> says 4.

Expected results:
The XML output should report every brick with the same details as the plain CLI output.

Additional info:

CLI output:

[root@node94 ~]# gluster snap status snap1

Snap Name : snap1
Snap UUID : a322d93a-2732-447d-ab88-b943fa402fd2

	Brick Path        :   10.70.47.11:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick1/testvol_brick0
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.52
	LV Size           :   9.95g

	Brick Path        :   10.70.47.16:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick2/testvol_brick1
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.52
	LV Size           :   9.95g

	Brick Path        :   10.70.47.152:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick3/testvol_brick2
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.51
	LV Size           :   9.95g

	Brick Path        :   10.70.46.52:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick4/testvol_brick3
	Volume Group      :   RHS_vg0
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   3.54
	LV Size           :   9.95g

XML output:

[root@node94 ~]# gluster snap status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapStatus>
    <snapshots>
      <snapshot>
        <name>snap1</name>
        <uuid>a322d93a-2732-447d-ab88-b943fa402fd2</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>4</brickCount>
          <brick>
            <path>10.70.47.11:/run/gluster/snaps/2c790e6132e447e79168d9708d4abfe7/brick1/testvol_brick0</path>
            <volumeGroup>RHS_vg0</volumeGroup>
          </brick>
        </volume>
      </snapshot>
    </snapshots>
  </snapStatus>
</cliOutput>
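For anyone consuming this output programmatically, the gap is easy to demonstrate. Below is a minimal parsing sketch (hypothetical, not part of the original report) using Python's standard xml.etree.ElementTree; the per-brick tag names (brick_running, pid, data_percentage, lvSize) are taken from the fixed output shown later in this bug.

# check_snap_xml.py -- hypothetical sketch: report missing per-brick fields
# in `gluster snap status --xml` output.
import subprocess
import xml.etree.ElementTree as ET

EXPECTED = ("path", "volumeGroup", "brick_running", "pid",
            "data_percentage", "lvSize")

out = subprocess.run(["gluster", "snap", "status", "--xml"],
                     capture_output=True, text=True, check=True).stdout
root = ET.fromstring(out)

for snap in root.iter("snapshot"):
    print("snapshot:", snap.findtext("name"))
    for vol in snap.iter("volume"):
        bricks = vol.findall("brick")
        declared = int(vol.findtext("brickCount"))
        if declared != len(bricks):
            print("  brickCount says %d, but only %d <brick> elements present"
                  % (declared, len(bricks)))
        for brick in bricks:
            present = {child.tag for child in brick}
            missing = [t for t in EXPECTED if t not in present]
            if missing:
                print("  %s missing: %s"
                      % (brick.findtext("path"), ", ".join(missing)))

Against the output above, this flags the three absent bricks and the four missing status fields on the one brick that is emitted.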
Also, no XML output is generated when gluster snap status is executed with a snap name:

gluster snap status snap1 --xml
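Until a fix is available, callers have to treat this empty output as a failure rather than as valid XML; a minimal guard (hypothetical sketch):

import subprocess

res = subprocess.run(["gluster", "snap", "status", "snap1", "--xml"],
                     capture_output=True, text=True)
if not res.stdout.strip():
    # Known issue on glusterfs 3.7.9: per-snapshot status emits no XML at all.
    raise RuntimeError("gluster returned no XML for 'snap status snap1'")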
Upstream mainline patch http://review.gluster.org/14018 has been posted for review.
Upstream mainline : http://review.gluster.org/14018
Upstream 3.8      : http://review.gluster.org/15291

The fix is available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.
[root@rhs-client46 core]# gluster snapshot status snap1 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapStatus>
    <snapshots>
      <snapshot>
        <name>snap1</name>
        <uuid>fa6a58c8-a6c5-4819-b432-3f63b7be4958</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>6</brickCount>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick1/b1</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.60</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick2/b2</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.59</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.46:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick3/b3</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.05</data_percentage>
            <lvSize>1.80t</lvSize>
          </brick>
          <brick>
            <path>10.70.44.7:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick4/b4</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>1.45</data_percentage>
            <lvSize>926.85g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick5/b5</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>6.09</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick6/b6</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.06</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
        </volume>
      </snapshot>
    </snapshots>
  </snapStatus>
</cliOutput>

====================================================

[root@rhs-client46 core]# gluster snapshot status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapStatus>
    <snapshots>
      <snapshot>
        <name>snap1</name>
        <uuid>fa6a58c8-a6c5-4819-b432-3f63b7be4958</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>6</brickCount>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick1/b1</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>9228</pid>
            <data_percentage>4.60</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick2/b2</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>3887</pid>
            <data_percentage>4.59</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.46:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick3/b3</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>19396</pid>
            <data_percentage>0.05</data_percentage>
            <lvSize>1.80t</lvSize>
          </brick>
          <brick>
            <path>10.70.44.7:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick4/b4</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>31636</pid>
            <data_percentage>1.45</data_percentage>
            <lvSize>926.85g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick5/b5</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>9250</pid>
            <data_percentage>6.09</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/919d9eb8aaed4d18af5f236157d64104/brick6/b6</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>Yes</brick_running>
            <pid>3909</pid>
            <data_percentage>0.06</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
        </volume>
      </snapshot>
      <snapshot>
        <name>snap2</name>
        <uuid>15d24607-12ff-4006-8703-cd30f32a306f</uuid>
        <volCount>1</volCount>
        <volume>
          <brickCount>6</brickCount>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick1/b1</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.60</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick2/b2</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>4.59</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.46:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick3/b3</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.05</data_percentage>
            <lvSize>1.80t</lvSize>
          </brick>
          <brick>
            <path>10.70.44.7:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick4/b4</path>
            <volumeGroup>RHS_vg1</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>1.45</data_percentage>
            <lvSize>926.85g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.70:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick5/b5</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>6.09</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
          <brick>
            <path>10.70.36.71:/run/gluster/snaps/0884448eca204df0b766b308b289dab1/brick6/b6</path>
            <volumeGroup>RHS_vg2</volumeGroup>
            <brick_running>No</brick_running>
            <pid>N/A</pid>
            <data_percentage>0.06</data_percentage>
            <lvSize>199.00g</lvSize>
          </brick>
        </volume>
      </snapshot>
    </snapshots>
  </snapStatus>
</cliOutput>

Bug verified on build glusterfs-3.8.4-2.el7rhgs.x86_64.
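The same check can be scripted for regression runs; a minimal validation sketch (hypothetical, tag names taken from the output above):

import subprocess
import xml.etree.ElementTree as ET

REQUIRED = ("path", "volumeGroup", "brick_running", "pid",
            "data_percentage", "lvSize")

out = subprocess.run(["gluster", "snapshot", "status", "--xml"],
                     capture_output=True, text=True, check=True).stdout
root = ET.fromstring(out)
assert root.findtext("opRet") == "0"

for snap in root.iter("snapshot"):
    for vol in snap.iter("volume"):
        bricks = vol.findall("brick")
        # brickCount must agree with the number of <brick> elements emitted
        assert int(vol.findtext("brickCount")) == len(bricks)
        for brick in bricks:
            for tag in REQUIRED:
                assert brick.find(tag) is not None, (
                    "%s: missing <%s>" % (brick.findtext("path"), tag))
print("snap status XML contains all expected per-brick fields")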
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html