Description of problem:
'gluster volume status <vol> --xml' should also provide the UUID of the host when reporting information about the different services.

Version-Release number of selected component (if applicable):
glusterfs 3.4.0alpha2 built on Mar 6 2013 23:54:05

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:
The UUID of the hosts should be provided in the output of the 'gluster volume status <vol> --xml' command.

Additional info:
The requirement is that the nfs/shd services have a UUID along with the hostname.
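For illustration, a minimal sketch (Python) of how a consumer of this output might pair each brick/nfs/shd entry with its peer UUID, assuming the UUID is exposed as a <peerid> element next to <hostname> inside each <node> entry. The sample XML below is an assumed shape for illustration only, not the exact schema emitted by glusterd, and the UUID is a placeholder.

# Minimal sketch (assumed schema): pair each service entry from
# 'gluster volume status <vol> --xml' with its peer UUID, assuming a
# <peerid> element sits next to <hostname> inside each <node>.
import xml.etree.ElementTree as ET

SAMPLE = """\
<cliOutput>
  <volStatus><volumes><volume>
    <node>
      <hostname>NFS Server</hostname>
      <peerid>00000000-0000-0000-0000-000000000000</peerid>
      <status>1</status>
    </node>
  </volume></volumes></volStatus>
</cliOutput>"""

def nodes_with_peerid(xml_text):
    """Yield (hostname, peerid) for every <node> entry in the XML."""
    root = ET.fromstring(xml_text)
    for node in root.iter("node"):
        yield node.findtext("hostname"), node.findtext("peerid")

# In a real check, the input would be the stdout of
# 'gluster volume status <vol> --xml' instead of SAMPLE.
for hostname, peerid in nodes_with_peerid(SAMPLE):
    print(hostname, peerid)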
REVIEW: http://review.gluster.org/6162 (cli: add <uuid> tag to volume status xml output) posted (#1) for review on master by Bala FA (barumuga)
REVIEW: http://review.gluster.org/6162 (cli: add peerid to volume status xml output) posted (#2) for review on master by Bala FA (barumuga)
COMMIT: http://review.gluster.org/6267 committed in release-3.4 by Anand Avati (avati)
------
commit 25dadcf6725b834bf735224ba165330b8872af4f
Author: Bala.FA <barumuga>
Date:   Tue Oct 29 17:17:12 2013 +0530

    cli: add peerid to volume status xml output

    This patch adds <peerid> tag to bricks and nfs/shd like services to
    volume status xml output.

    BUG: 955548
    Change-Id: I0e58e323534a19d485c9523466bce215bd466160
    Signed-off-by: Bala.FA <barumuga>
    Reviewed-on: http://review.gluster.org/6267
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>
This change breaks compatibility with previous releases that don't provide a peerid. This is particularly true while upgrading from a previous release. We just upgraded from 3.4.0 to 3.4.2 and could not get our monitoring to function, because the 3.4.2 node requires the peerid value to produce the XML output, which the 3.4.0 node did not provide. While upgrading the other node to 3.4.2 fixed the problem, it would be nice if the two could coexist with one another.

The failure mode wasn't great either: the 'gluster volume status --xml' command would simply exit with status code 2 and produce no output.

Todd
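For what it's worth, a monitoring script can at least fail more gracefully in such a mixed-version cluster. The sketch below is an assumption-laden illustration: the volume argument, error message, and the <node>/<hostname>/<status> element names are hypothetical; only <peerid> is the tag named in this bug, and it may be absent on pre-fix peers. It does not fix the underlying incompatibility, which upgrading both nodes resolved.

# Sketch of a defensive monitoring-side check for mixed-version clusters.
# Treats <peerid> as optional and reports the "exit status 2, no output"
# case explicitly instead of crashing on empty XML.
import subprocess
import xml.etree.ElementTree as ET

def volume_status_nodes(volume):
    proc = subprocess.run(
        ["gluster", "volume", "status", volume, "--xml"],
        capture_output=True, text=True)
    if proc.returncode != 0 or not proc.stdout.strip():
        # The failure mode described above: non-zero exit, no XML at all.
        raise RuntimeError(
            "gluster volume status exited with %d and produced no XML"
            % proc.returncode)
    root = ET.fromstring(proc.stdout)
    for node in root.iter("node"):
        yield {
            "hostname": node.findtext("hostname"),
            "peerid": node.findtext("peerid", default="unknown"),  # optional on old peers
            "status": node.findtext("status"),
        }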
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.4.3, please reopen this bug report.

glusterfs-3.4.3 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be, or will soon become, available. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.4.3. Along the same lines, the recent glusterfs-3.5.0 release [3] is likely to have the fix. You can verify this by reading the comments in this bug report and checking for comments mentioning "committed in release-3.5".

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137