Bug 955548 - adding host uuids to volume status command xml output
Summary: adding host uuids to volume status command xml output
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.4.0-alpha
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1024228
 
Reported: 2013-04-23 09:23 UTC by Kanagaraj
Modified: 2016-04-18 10:06 UTC (History)
5 users

Fixed In Version: glusterfs-3.4.3
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1024228 (view as bug list)
Environment:
Last Closed: 2014-04-17 13:12:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kanagaraj 2013-04-23 09:23:27 UTC
Description of problem:

'gluster volume status <vol> --xml' should provide the UUID of the host as well when providing the information about different services.

Version-Release number of selected component (if applicable):

glusterfs 3.4.0alpha2 built on Mar  6 2013 23:54:05

How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:

UUID of the hosts should be provided in the output of 'gluster volume status <vol> --xml' command

Additional info:

Comment 1 Bala.FA 2013-10-29 09:25:47 UTC
The requirement is that the nfs/shd services need to report a UUID along with the hostname.

Comment 2 Anand Avati 2013-10-29 11:49:41 UTC
REVIEW: http://review.gluster.org/6162 (cli: add <uuid> tag to volume status xml output) posted (#1) for review on master by Bala FA (barumuga)

Comment 3 Anand Avati 2013-11-14 11:54:55 UTC
REVIEW: http://review.gluster.org/6162 (cli: add peerid to volume status xml output) posted (#2) for review on master by Bala FA (barumuga)

Comment 4 Anand Avati 2013-11-26 19:53:20 UTC
COMMIT: http://review.gluster.org/6267 committed in release-3.4 by Anand Avati (avati) 
------
commit 25dadcf6725b834bf735224ba165330b8872af4f
Author: Bala.FA <barumuga>
Date:   Tue Oct 29 17:17:12 2013 +0530

    cli: add peerid to volume status xml output
    
    This patch adds <peerid> tag to bricks and nfs/shd like services to
    volume status xml output.
    
    BUG: 955548
    Change-Id: I0e58e323534a19d485c9523466bce215bd466160
    Signed-off-by: Bala.FA <barumuga>
    Reviewed-on: http://review.gluster.org/6267
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>
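
As a rough illustration of the committed change (only the <peerid> tag is confirmed by the patch summary; the surrounding element names and the UUID value are assumptions for the sake of the sketch), each brick/service entry in the status XML gains a per-node UUID along these lines:

```xml
<node>
  <hostname>NFS Server</hostname>
  <!-- new in this patch: UUID of the peer hosting the service -->
  <peerid>5c2b3a1d-0000-0000-0000-000000000000</peerid>
  <status>1</status>
</node>
```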

Comment 5 Todd Stansell 2014-03-12 05:19:45 UTC
This change breaks compatibility with previous releases that don't provide a peerid.  This is particularly true while upgrading from a previous release.  We just upgraded from 3.4.0 to 3.4.2 and could not get our monitoring to function because the 3.4.2 node requires the peerid value to produce the xml output, which the 3.4.0 node did not provide.

While upgrading the other node to 3.4.2 fixed the problem, it would be nice if the two releases could coexist. The failure mode wasn't great either: the 'gluster volume status --xml' command would simply exit with status code 2 and produce no output.

Todd
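
The compatibility problem Todd describes can also be mitigated on the monitoring side by treating <peerid> as optional. A minimal sketch (the sample document and element layout are assumptions based on this report, not the exact gluster schema):

```python
# Sketch: parse 'gluster volume status --xml' output while tolerating
# nodes that lack a <peerid> tag (e.g. peers still on an older release).
import xml.etree.ElementTree as ET

SAMPLE = """<cliOutput>
  <volStatus><volumes><volume>
    <volName>gv0</volName>
    <node>
      <hostname>host1</hostname>
      <peerid>5c2b3a1d-0000-0000-0000-000000000000</peerid>
      <status>1</status>
    </node>
    <node>
      <hostname>host2</hostname>
      <status>1</status>
    </node>
  </volume></volumes></volStatus>
</cliOutput>"""

def nodes_with_peerid(xml_text):
    """Return one dict per <node>; peerid is None when the tag is absent."""
    root = ET.fromstring(xml_text)
    result = []
    for node in root.iter("node"):
        result.append({
            "hostname": node.findtext("hostname"),
            # findtext returns the default when the tag is missing,
            # so a pre-upgrade peer does not break the parser
            "peerid": node.findtext("peerid", default=None),
        })
    return result

print(nodes_with_peerid(SAMPLE))
```

With this approach the monitor keeps working during a rolling upgrade and can simply skip UUID-based checks for nodes that do not report one yet.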

Comment 6 Niels de Vos 2014-04-17 13:12:37 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.4.3, please reopen this bug report.

glusterfs-3.4.3 has been announced on the Gluster Developers mailinglist [1], packages for several distributions should already be or become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.4.3. Along the same lines, the recent glusterfs-3.5.0 release [3] is likely to contain the fix. You can verify this by reading the comments in this bug report and checking for comments mentioning "committed in release-3.5".

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137

