Bug 1374290
| Summary: | "gluster vol status all clients --xml" doesn't generate xml if there is a failure in between | |||
|---|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj> | |
| Component: | cli | Assignee: | Atin Mukherjee <amukherj> | |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | ||
| Severity: | medium | Docs Contact: | ||
| Priority: | medium | |||
| Version: | 3.8 | CC: | amukherj, bugs, rhs-bugs, rnalakka, sbairagy, storage-qa-internal | |
| Target Milestone: | --- | Keywords: | Triaged | |
| Target Release: | --- | |||
| Hardware: | All | |||
| OS: | All | |||
| Whiteboard: | ||||
| Fixed In Version: | glusterfs-3.8.4 | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | 1372553 | |||
| : | 1374298 (view as bug list) | Environment: | ||
| Last Closed: | 2016-09-16 18:28:44 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | 1372553 | |||
| Bug Blocks: | 1369049 | |||
|
Comment 1
Worker Ant
2016-09-08 12:07:55 UTC
REVIEW: http://review.gluster.org/15428 (cli: fix volume status xml generation) posted (#2) for review on release-3.8 by Atin Mukherjee (amukherj)

REVIEW: http://review.gluster.org/15428 (cli: fix volume status xml generation) posted (#3) for review on release-3.8 by Atin Mukherjee (amukherj)

Description of problem:
Sometimes the gstatus command prints the traceback below instead of proper output. The issue is that glusterd hands malformed XML output to the gstatus script.
```
# gstatus
Traceback (most recent call last):
  File "/usr/bin/gstatus", line 221, in <module>
    main()
  File "/usr/bin/gstatus", line 135, in main
    cluster.update_state(self_heal_backlog)
  File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 638, in update_state
    self.calc_connections()
  File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 713, in calc_connections
    cmd.run()
  File "/usr/lib/python2.7/site-packages/gstatus/libcommand/glustercmd.py", line 100, in run
    xmldoc = ETree.fromstring(''.join(self.stdout))
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1301, in XML
    return parser.close()
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1654, in close
    self._raiseerror(v)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
    raise err
xml.etree.ElementTree.ParseError: no element found: line 1, column 0
```
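The `ParseError: no element found: line 1, column 0` means ElementTree was handed an empty string, i.e. the CLI produced no XML at all before failing. Independent of the CLI fix, a consumer like gstatus could guard against empty or truncated output with a check like this sketch (`parse_status_xml` is a hypothetical helper for illustration, not actual gstatus code):

```python
import xml.etree.ElementTree as ET

def parse_status_xml(raw):
    """Parse `gluster vol status --xml` output defensively.

    If the CLI failed partway through generation, its output may be
    empty or truncated; ElementTree then raises ParseError exactly as
    in the traceback above. Returning None lets the caller report
    "no data" instead of crashing.
    """
    if not raw.strip():
        return None  # empty output: CLI failed before emitting anything
    try:
        return ET.fromstring(raw)
    except ET.ParseError:
        return None  # truncated/malformed output


# Empty output (the failure mode in this bug) no longer raises:
assert parse_status_xml("") is None
```

This only masks the symptom on the consumer side; the real fix below makes the CLI emit well-formed XML on failure paths.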
Version-Release number of selected component (if applicable):
mainline
How reproducible:
Not always reproducible.
Steps to Reproduce:
1. Install glusterfs
2. Install gstatus
3. Run gstatus
Actual results:
gstatus sometimes gives a traceback instead of the cluster health status.
Expected results:
The CLI should provide well-formed XML output, even when a failure occurs partway through.
COMMIT: http://review.gluster.org/15428 committed in release-3.8 by Niels de Vos (ndevos)

```
commit cb15b3be846d6ff0be450b245aba17ba67457b1e
Author: Atin Mukherjee <amukherj>
Date:   Fri Sep 2 10:42:44 2016 +0530

    cli: fix volume status xml generation

    While generating xml, if CLI fails in between xml output doesn't get
    dumped into stdout. Fix is to invoke cli_xml_output_vol_status_end ()
    in such failures.

    >Reviewed-on: http://review.gluster.org/15384
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Smoke: Gluster Build System <jenkins.org>
    >Reviewed-by: Samikshan Bairagya <samikshan>
    >Reviewed-by: Prashanth Pai <ppai>

    Change-Id: I7cb3097f5ae23092e6d20f68bd75aa190c31ed88
    BUG: 1374290
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/15428
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Samikshan Bairagya <samikshan>
    Reviewed-by: Prashanth Pai <ppai>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
```

All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.4, please open a new bug report.

glusterfs-3.8.4 has been announced on the Gluster mailinglists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/announce/2016-September/000060.html
[2] https://www.gluster.org/pipermail/gluster-users/
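The pattern in the commit, calling `cli_xml_output_vol_status_end ()` on failure paths so the XML document is always closed, can be illustrated with a minimal Python sketch. The function and element names below are hypothetical stand-ins for the gluster C code, not the actual implementation:

```python
import io
import xml.etree.ElementTree as ET

def dump_vol_status(volumes, fail_on=None):
    """Sketch of the fix pattern: always emit the closing tags.

    Before the fix, a mid-generation failure left the document
    unterminated (or empty). The try/finally mirrors invoking the
    "status end" helper on error paths, so even partial output
    stays well-formed XML.
    """
    buf = io.StringIO()
    buf.write("<cliOutput><volStatus>")
    try:
        for vol in volumes:
            if vol == fail_on:
                raise RuntimeError("simulated mid-generation failure")
            buf.write("<volume><volName>%s</volName></volume>" % vol)
    except RuntimeError:
        pass  # error is reported elsewhere; still close the document
    finally:
        buf.write("</volStatus></cliOutput>")  # the "status end" step
    return buf.getvalue()


# Even with a failure after the first volume, the output parses:
out = dump_vol_status(["gv0", "gv1"], fail_on="gv1")
ET.fromstring(out)  # no ParseError
```

The key design point is that the closing call lives on every exit path (a `finally` here, explicit calls in the C error paths), so consumers such as gstatus always receive a parseable document, with partial data rather than none.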