Bug 1260777 - gstatus: python crash while running gstatus -a
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gstatus
Version: 3.1
Hardware: aarch64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Prashant Dhange
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks: 1474007
Reported: 2015-09-07 13:21 EDT by Anil Shah
Modified: 2017-09-26 05:40 EDT (History)
CC List: 10 users

See Also:
Fixed In Version: gstatus-0.66
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-09-26 05:29:07 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rnalakka: needinfo-


Attachments: None
Description Anil Shah 2015-09-07 13:21:42 EDT
Description of problem:

After a remove-brick operation, running gstatus -a produced a traceback.

Version-Release number of selected component (if applicable):


[root@darkknightrises ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64

[root@darkknightrises ~]# gstatus --version
gstatus 0.65

How reproducible:

1/1

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Mount the volume via FUSE/NFS on a client
3. Create some files and directories
4. Add bricks to the volume (add-brick)
5. Start rebalance
6. Remove bricks (remove-brick)
7. Run gstatus -a
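
For reference, a rough command-line equivalent of the steps above; the volume name, host names, and brick paths are placeholders for illustration, not taken from this report:

gluster volume create testvol replica 2 host1:/rhgs/brick1 host2:/rhgs/brick1 host1:/rhgs/brick2 host2:/rhgs/brick2
gluster volume start testvol
mount -t glusterfs host1:/testvol /mnt/testvol    # on the client (or use NFS)
# create some files and directories under /mnt/testvol
gluster volume add-brick testvol host1:/rhgs/brick3 host2:/rhgs/brick3
gluster volume rebalance testvol start
gluster volume remove-brick testvol host1:/rhgs/brick3 host2:/rhgs/brick3 start
gstatus -a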

Actual results:

gstatus crashes with a Python traceback:

 [root@rhs-client47 ~]# gstatus -a
 
  Traceback (most recent call last):         
   File "/usr/bin/gstatus", line 221, in <module>
     main()
   File "/usr/bin/gstatus", line 132, in main
     cluster.initialise()
   File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 95, in initialise
     self.define_volumes()
   File "/usr/lib/python2.7/site-packages/gstatus/libgluster/cluster.py", line 208, in define_volumes
     xml_root = ETree.fromstring(xml_string)
   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1301, in XML
     return parser.close()
   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1654, in close
     self._raiseerror(v)
   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
   raise err
   xml.etree.ElementTree.ParseError: no element found: line 1, column 0

Expected results:

gstatus should not crash; it should handle the condition and report a meaningful error.
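
The "no element found: line 1, column 0" error means ETree.fromstring() was handed an empty string, i.e. the query gstatus issues (presumably the gluster CLI's --xml output) returned nothing while the remove-brick was in flight. Below is a minimal sketch of how such a parse site could guard against that; it assumes xml_string already holds the CLI output and is not the actual gstatus code or the fix shipped in gstatus-0.66:

# Minimal sketch, not the actual gstatus implementation or the 0.66 fix.
# xml_string is assumed to hold whatever the gluster CLI printed.
import sys
import xml.etree.ElementTree as ETree

def parse_volume_xml(xml_string):
    """Parse gluster's XML output, handling the empty/invalid case."""
    if not xml_string.strip():
        # Nothing came back from gluster; exit with a message instead of
        # letting ETree.fromstring() raise the ParseError seen above.
        sys.exit("gstatus: gluster returned no XML output, please retry")
    try:
        return ETree.fromstring(xml_string)
    except ETree.ParseError as err:
        sys.exit("gstatus: unable to parse gluster XML output: %s" % err)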

Additional info:
Comment 3 Mukul Malhotra 2016-10-05 08:00:25 EDT
A similar issue has been reported by another customer.

Mukul
Comment 25 Bipin Kunal 2017-07-31 01:46:17 EDT
@Abhishek :

    I see the bug is still in the "Assigned" state, which suggests the upstream patch has not yet been merged, although the status of "https://github.com/gluster/gstatus/pull/4" shows it as merged. I am not sure whether any further patches are needed.

    Please do reply to comment #23 as well. Delaying it will kick this out of 3.3.1.

@Sac : 
    What is pending on the patch, and when can we give a test-fix build to the customer?

-Bipin
Comment 27 Sachidananda Urs 2017-07-31 03:03:28 EDT
(In reply to Bipin Kunal from comment #25)
> @Abhishek :
> 
>     I see the bug is still in the "Assigned" state, which suggests the
> upstream patch has not yet been merged, although the status of
> "https://github.com/gluster/gstatus/pull/4" shows it as merged. I am not
> sure whether any further patches are needed.
> 
>     Please do reply to comment #23 as well. Delaying it will kick this out
> of 3.3.1.
> 
> @Sac : 
>     What is pending on the patch, and when can we give a test-fix build to
> the customer?

Bipin,

There are multiple bugs tracking the same issue. This bug is fixed, the hotfix is available, and Anil has tested it; the fix is being delivered as part of another bug.
Comment 34 Sachidananda Urs 2017-09-26 05:29:07 EDT
This bug is resolved as part of the fix for BZ#1454544; closing the bug.
