Created attachment 855269 [details]
gluster volume status --xml <volname> output

Description of problem:
'gluster volume status --xml' produces unusual nested <node> elements in its output. I've attached the output of the command both with and without the --xml flag. This format wasn't present in earlier versions of gluster, so I'm not sure whether it was introduced intentionally or by mistake. If it was intentional, it isn't clear to me what the nesting is meant to represent.

Version-Release number of selected component (if applicable):
This occurs in 3.4.2. I'm not sure exactly when it appeared, but I believe the output was correct in 3.3.x. I noticed the bug because Puppet-Gluster flagged a problem with the output that I don't remember seeing before.

How reproducible:
100%

Steps to Reproduce:
1. Build a gluster setup.
2. Run gluster volume status --xml
3. Observe the output

Actual results:
Unexpected <node> elements nested inside other <node> elements.

Expected results:
No nested <node> elements.

Additional info:
The nesting only affects the 'NFS Server' type elements; the normal node (host) elements are correct and haven't changed.
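For anyone triaging this, here is a minimal sketch of how the nesting can be spotted programmatically. Assumptions (not taken from the report itself): the gluster CLI is on PATH, the volume name is passed as the first argument, and <node> entries carry a <hostname> child as in the attached XML output.

#!/usr/bin/env python
# Minimal sketch: flag doubled <node> nesting in 'gluster volume status --xml'.
# Assumes the gluster CLI is installed and the volume name is sys.argv[1].
import subprocess
import sys
import xml.etree.ElementTree as ET

volname = sys.argv[1]
out = subprocess.check_output(['gluster', 'volume', 'status', '--xml', volname])
root = ET.fromstring(out)

# A flat status document has no <node> inside another <node>; any such
# child is the unexpected nesting described above.
for node in root.iter('node'):
    if node.find('node') is not None:
        print('nested <node> found under host: %s' % node.findtext('hostname'))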
Created attachment 855270 [details]
Text equivalent of the command output without --xml
In case anyone is interested, the workaround for Puppet-Gluster is:
https://github.com/purpleidea/puppet-gluster/blame/master/files/xml.py#L227

The commit is:
https://github.com/purpleidea/puppet-gluster/commit/0191594e5390a70503b0dce177d2998a0a0f434a
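For reference, the idea behind that workaround, reduced to a minimal sketch (this is not the actual puppet-gluster code, and the element layout is assumed from the description above): if a <node> contains another <node>, lift the inner element up one level before parsing the rest of the tree.

import xml.etree.ElementTree as ET

def flatten_nested_nodes(root):
    # Lift any <node> found inside another <node> up to the outer node's
    # parent, so later parsing sees a flat list of node entries.
    # Minimal sketch only, not the puppet-gluster implementation.
    for parent in list(root.iter()):
        for outer in list(parent.findall('node')):
            for inner in list(outer.findall('node')):
                outer.remove(inner)
                parent.append(inner)
    return root

Calling this on the parsed tree, e.g. flatten_nested_nodes(ET.fromstring(xml_output)), before walking the node list should make the 3.4.2 output parse the same way as the older, flat format.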
I believe this is the same as 1046020. The weird nesting used to occur when one of the peers was down. Can you confirm if this is the case in your environment as well? 1046020 has been fixed in master btw.
(In reply to Kaushal from comment #3)
> I believe this is the same as 1046020. The weird nesting used to occur when
> one of the peers was down. Can you confirm if this is the case in your
> environment as well? 1046020 has been fixed in master btw.

In my environment all the peers were up; this is shown in the attachments too. I tested this on 3.4.2. I don't know whether it also occurs in 3.5.x, but I'll check for the issue when 3.5.0beta2 is released.
This issue has been fixed through BZ 1046020.