+++ This bug was initially created as a clone of Bug #845504 +++

gluster volume create --xml prints the wrong xml structure:

<cliOutput>
  <volCreate>
    <count>1</count>
    <bricks> 192.168.122.2:/tmp/test2-b1 </bricks>
    <transport>tcp</transport>
    <type>0</type>
    <volname>test2</volname>
  </volCreate>
</cliOutput>

It needs to output something like:

<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volCreate>
    <volume>
      <name>music</name>
      <id>b3114c71-741b-4c6f-a39e-80384c4ea3cf</id>
      <type>2</type>
      <status>1</status>
      <brickCount>2</brickCount>
      <distCount>2</distCount>
      <stripeCount>1</stripeCount>
      <replicaCount>2</replicaCount>
      <transport>0</transport>
      <bricks>
        <brick>192.168.122.2:/tmp/music-b1</brick>
        <brick>192.168.122.2:/tmp/music-b2</brick>
      </bricks>
    </volume>
  </volCreate>
</cliOutput>

If not all volume info is possible, at least <name> and <id> need to be present.
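For reference, a consumer could pull <name>, <id>, and the brick list out of the expected structure with any standards-compliant XML parser; a minimal sketch using Python's stdlib ElementTree (the sample document is abbreviated from the expected output above):

```python
import xml.etree.ElementTree as ET

# Abbreviated form of the expected <cliOutput> structure.
expected = """<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volCreate>
    <volume>
      <name>music</name>
      <id>b3114c71-741b-4c6f-a39e-80384c4ea3cf</id>
      <bricks>
        <brick>192.168.122.2:/tmp/music-b1</brick>
        <brick>192.168.122.2:/tmp/music-b2</brick>
      </bricks>
    </volume>
  </volCreate>
</cliOutput>"""

# Navigate to the <volume> element and read its children.
vol = ET.fromstring(expected).find("volCreate/volume")
print(vol.findtext("name"))   # music
print(vol.findtext("id"))     # b3114c71-741b-4c6f-a39e-80384c4ea3cf
print([b.text for b in vol.iterfind("bricks/brick")])
```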
Patch for upstream/master under review at http://review.gluster.org/3869
Fixed by commit f1f3d1c (cli: Changes and enhancements to XML output) for bug https://bugzilla.redhat.com/show_bug.cgi?id=828131. Reviewed at http://review.gluster.org/3869.
Not yet fixed in version glusterfs 3.3.0rhsvirt1-8.el6rhs; moving to ASSIGNED.
Fixed only in upstream; available for testing with the 3.4.0qa2 release.
Verified on glusterfs 3.4.0qa5. <name> and <id> are present and populated correctly, but the output does not match the expected xml mentioned above, so changing status.

Actual output:

# gluster volume create test replica 2 10.16.159.138:/home/xml4 10.16.159.128:/home/xml4 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>115</opErrno>
  <opErrstr></opErrstr>
  <volCreate>
    <volume>
      <name>test</name>
      <id>fc58bdfa-d7f4-41bf-b829-5bf8f35713fb</id>
    </volume>
  </volCreate>
</cliOutput>

Expected output: the xml data above is not structured properly: <opErrstr></opErrstr> should be <opErrstr/>.
Do you know which version of libxml2 is being used? Empty elements correctly appear as a single tag on my system.
This is not related to the version of libxml2; the empty elements were being displayed as a single tag by xmllint.
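For what it's worth, <opErrstr></opErrstr> and <opErrstr/> are equivalent serializations of the same empty element, so any conforming XML parser treats them identically; a quick demonstration with Python's stdlib ElementTree:

```python
import xml.etree.ElementTree as ET

# Both serializations of an empty element parse to the same tree.
a = ET.fromstring("<cliOutput><opErrstr></opErrstr></cliOutput>")
b = ET.fromstring("<cliOutput><opErrstr/></cliOutput>")

# Neither element has text content.
assert a.find("opErrstr").text == b.find("opErrstr").text  # both None

# Re-serializing collapses the open/close pair to a self-closing tag.
print(ET.tostring(a).decode())  # <cliOutput><opErrstr /></cliOutput>
```

The difference observed in the CLI output is purely cosmetic; the pretty-print change tracked in this bug is about readability, not correctness.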
The fix for this is under review at http://review.gluster.org/4355
CHANGE: http://review.gluster.org/4355 (cli: output xml in pretty format) merged in master by Anand Avati (avati)
Verified with 3.4.0.2rhs-1.el6rhs.x86_64.

1. In case of success, need info: is the behaviour below expected?
   a) Only <name> and <id> appear under the <volume> tag.
   b) <opErrno> is always 115.

e.g.

[root@mia ~]# gluster volume create 22 fred.lab.eng.blr.redhat.com:/rhs/brick1/23 mia.lab.eng.blr.redhat.com:/rhs/brick1/23 fan.lab.eng.blr.redhat.com:/rhs/brick1/23 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>115</opErrno>
  <opErrstr/>
  <volCreate>
    <volume>
      <name>22</name>
      <id>fc487e35-179f-49c3-91c5-efdce5525dea</id>
    </volume>
  </volCreate>
</cliOutput>

2. In case of failure, <opErrstr/> is sometimes blank, hence moving this to ASSIGNED.

e.g. <opErrstr/> is blank:

[root@cutlass ~]# gluster volume create 23 cutlass.lab.eng.blr.redhat.com:/rhs/brick1/2s mia.lab.eng.blr.redhat.com:/rhs/brick1/222 fan.lab.eng.blr.redhat.com:/rhs/brick1/222 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>115</opErrno>
  <opErrstr/>
</cliOutput>

<opErrstr> has a message:

[root@mia ~]# gluster volume create 23 cutlass.lab.eng.blr.redhat.com:/rhs/brick1/2s mia.lab.eng.blr.redhat.com:/rhs/brick1/222 fan.lab.eng.blr.redhat.com:/rhs/brick1/222 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>115</opErrno>
  <opErrstr>/rhs/brick1/222 is already part of a volume</opErrstr>
</cliOutput>
3. Need info: in case of failure, the <volCreate> tag and all tags under it are not displayed. Is that expected behaviour?
Regarding the tags not being shown during error, that is expected, and I believe it's good enough for the RHSC team, which is the primary user of the xml outputs. Anyway, tagging this as a needinfo from Bala. Bala, can you give your opinion? Other than that, I'll take a look into the other errors. Reducing priority, as this isn't anything serious. - Kaushal
Current xml output resolves this issue. I would prefer this bz to be closed safely.
Verified with 3.4.0.52rhs-1.el6rhs.x86_64. The xml output now shows the error string in case of failure. But I am unable to understand the <opErrno> tag: in case of failure it is always zero, and on success it has a different number each time.

e.g.

[root@7-VM1 ~]# gluster volume create xml12 10.70.36.130:/rhs/brick1/x2 10.70.36.132:/rhs/brick1/x2 10.70.36.133:/rhs/brick1/x2 10.70.36.133:/rhs/brick2/x2 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>2</opErrno>
  <opErrstr/>
  <volCreate>
    <volume>
      <name>xml12</name>
      <id>1a0f073a-d508-4899-9b90-90ef7d6b2ebc</id>
    </volume>
  </volCreate>
</cliOutput>

[root@7-VM1 ~]# gluster volume create xml1 10.70.36.130:/rhs/brick1/x1 10.70.36.132:/rhs/brick1/x1 10.70.36.133:/rhs/brick1/x1 10.70.36.133:/rhs/brick2/x1 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>61</opErrno>
  <opErrstr/>
  <volCreate>
    <volume>
      <name>xml1</name>
      <id>fa685aab-02c9-4c73-9ffc-275bd1df0db7</id>
    </volume>
  </volCreate>
</cliOutput>

Could you please explain the use of this tag? Is the behaviour mentioned above expected? What are the expected values for this tag on failure and on success?
The important tag is opRet. When opRet is 0, the other two are basically unimportant; when opRet is non-zero, opErrno and opErrstr have some meaning. The opErrno value is possibly an errno returned by some failed internal call, propagated up through the call graph. When opRet is 0, opErrno should have been reset to 0, but that isn't being done now (this is a good candidate for an RFE). When opRet is 0, the user should not consider the value of opErrno.
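A consumer of the --xml output can encode that rule directly: check opRet first, and only read opErrno/opErrstr on failure. A minimal sketch with Python's stdlib ElementTree (parse_cli_output is a hypothetical helper name, not part of gluster):

```python
import xml.etree.ElementTree as ET

def parse_cli_output(xml_text):
    """Return (ok, errno, errstr) from gluster's <cliOutput> XML.

    opErrno and opErrstr are only meaningful when opRet is non-zero.
    """
    root = ET.fromstring(xml_text)
    op_ret = int(root.findtext("opRet"))
    if op_ret == 0:
        # Success: ignore opErrno, it may hold a stale value.
        return True, 0, ""
    errno = int(root.findtext("opErrno", default="0"))
    errstr = root.findtext("opErrstr") or ""
    return False, errno, errstr

# Success case: the stale opErrno of 61 is deliberately discarded.
sample = """<cliOutput>
  <opRet>0</opRet>
  <opErrno>61</opErrno>
  <opErrstr/>
</cliOutput>"""
print(parse_cli_output(sample))  # (True, 0, '')
```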
As mentioned in Comment17, moving this bug to VERIFIED and opening a new bug/RFE for the issue mentioned in comment#16.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html