Bug 1260777

Summary: gstatus: python crash while running gstatus -a
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Anil Shah <ashah>
Component: gstatus
Assignee: Prashant Dhange <pdhange>
Status: CLOSED CURRENTRELEASE
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: abhishku, asrivast, bkunal, ccalhoun, mmalhotr, pdhange, pousley, rhinduja, rnalakka, surs
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Flags: rnalakka: needinfo-
Hardware: aarch64
OS: Linux
Whiteboard:
Fixed In Version: gstatus-0.66
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-09-26 09:29:07 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1474007

Description Anil Shah 2015-09-07 17:21:42 UTC
Description of problem:

After a remove-brick operation, running gstatus -a produced a traceback.

Version-Release number of selected component (if applicable):


[root@darkknightrises ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64

[root@darkknightrises ~]# gstatus --version
gstatus 0.65

How reproducible:

1/1

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Mount the volume as FUSE/NFS on a client
3. Create some files and directories
4. Add a brick to the volume (add-brick)
5. Start rebalance
6. Remove a brick (remove-brick)
7. Run gstatus -a

Actual results:

gstatus crashed with a Python traceback:

[root@rhs-client47 ~]# gstatus -a

 Traceback (most recent call last):
   File "/usr/bin/gstatus", line 221, in <module>
     main()
   File "/usr/bin/gstatus", line 132, in main
     cluster.initialise()
   File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 95, in initialise
     self.define_volumes()
   File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 208, in define_volumes
     xml_root = ETree.fromstring(xml_string)
   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1301, in XML
     return parser.close()
   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1654, in close
     self._raiseerror(v)
   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
     raise err
 xml.etree.ElementTree.ParseError: no element found: line 1, column 0
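
The "no element found: line 1, column 0" message is what ElementTree raises when it is handed an empty string, which suggests the gluster CLI command gstatus runs at this point returned no XML output at all. A minimal illustration of that failure mode (this is not gstatus code, just a reproduction of the parser behaviour):

    import xml.etree.ElementTree as ETree

    try:
        ETree.fromstring("")   # empty CLI output reaching the parser
    except ETree.ParseError as err:
        print(err)             # prints: no element found: line 1, column 0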

Expected results:

gstatus should not crash with a Python traceback; it should report the error gracefully.
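
As a sketch of the kind of graceful handling expected here, the CLI output could be validated before it reaches the XML parser. This is illustrative only, assuming the crash stems from empty CLI output; run_gluster_xml() and the messages below are hypothetical and are not the actual gstatus-0.66 fix:

    import subprocess
    import sys
    import xml.etree.ElementTree as ETree

    def run_gluster_xml(args):
        # Hypothetical helper: run a gluster CLI command and return its XML output.
        return subprocess.check_output(["gluster"] + args + ["--xml"])

    def load_volume_info():
        xml_string = run_gluster_xml(["volume", "info"])
        if not xml_string.strip():
            # Empty output would otherwise crash ETree.fromstring(); exit with a readable message instead.
            sys.exit("gstatus: 'gluster volume info --xml' returned no data; "
                     "check that glusterd is running and responsive")
        try:
            return ETree.fromstring(xml_string)
        except ETree.ParseError as err:
            sys.exit("gstatus: unable to parse gluster XML output: %s" % err)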

Additional info:

Comment 3 Mukul Malhotra 2016-10-05 12:00:25 UTC
A similar issue has been reported by another customer.

Mukul

Comment 25 Bipin Kunal 2017-07-31 05:46:17 UTC
@Abhishek :

    I see the bug is still in "Assigned" state. This means that the upstream patch is not yet merged, although I see the status of "https://github.com/gluster/gstatus/pull/4" as merged. Not sure if this needs any more patches.

    Please do reply to comment #23 as well. Delaying this will push the bug out of 3.3.1.

@Sac : 
    What is pending on the patch, and when can we give a test-fix build to the customer?

-Bipin

Comment 27 Sachidananda Urs 2017-07-31 07:03:28 UTC
(In reply to Bipin Kunal from comment #25)
> @Abhishek :
> 
>     I see the bug is still in "Assigned" state. This means that the upstream
> patch is not yet merged, although I see the status of
> "https://github.com/gluster/gstatus/pull/4" as merged. Not sure if this
> needs any more patches.
> 
>     Please do reply to comment #23 as well. Delaying this will push the bug
> out of 3.3.1.
> 
> @Sac : 
>     What is pending on the patch, and when can we give a test-fix build to
> the customer?

Bipin,

There are multiple bugs with the same issue. This bug is fixed, the hotfix is available, and Anil has tested it out. It is part of another bug.

Comment 34 Sachidananda Urs 2017-09-26 09:29:07 UTC
This bug is resolved as part of the fix for BZ#1454544; closing the bug.