Bug 1027675 - [RHSC] Remove-brick status dialog hangs when glusterd goes down on the storage node
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: anmol babu
QA Contact: Shruti Sampat
Keywords: ZStream
Depends On: 1028325 1036564
Blocks:
 
Reported: 2013-11-07 04:32 EST by Shruti Sampat
Modified: 2015-05-13 12:28 EDT
CC List: 8 users

See Also:
Fixed In Version: cb10
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-02-25 03:01:40 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
engine logs (3.73 MB, text/x-log)
2013-11-07 04:34 EST, Shruti Sampat


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 21086 None None None Never

Description Shruti Sampat 2013-11-07 04:32:31 EST
Description of problem:
-----------------------

On a cluster with a single node, when remove-brick is in progress, and glusterd is brought down on the storage node, the remove-brick status dialog hangs. The engine logs show an exception that says there is no UP server in the cluster.

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.2-0.22.master.el6_4 

How reproducible:
Always

Steps to Reproduce:
1. Create a cluster with a single node and create a volume, start it, mount it and create some data on the mount point.
2. Start remove-brick on the volume.
3. Kill glusterd on the storage node.
4. Click on status in the remove-brick drop-down menu in the Activities column.
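
A rough CLI equivalent of steps 1-3, as a sketch only - the volume, brick and host names below are illustrative, not from this report; remove-brick needs a distribute volume with at least two bricks so that one brick can be removed:

# on server1, the single storage node (illustrative names)
mkdir -p /bricks/b1 /bricks/b2 /mnt/testvol
gluster volume create testvol server1:/bricks/b1 server1:/bricks/b2
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol
cp -r /etc /mnt/testvol/                                      # create some data on the mount point
gluster volume remove-brick testvol server1:/bricks/b2 start  # step 2
service glusterd stop                                         # step 3: bring glusterd down on the node
# step 4 is done in the Console UI (Activities column -> remove-brick -> status)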

Actual results:
Remove-brick status dialog hangs.

Expected results:
If there is no UP server in the cluster, clicking remove-brick status should show an appropriate message saying that the status cannot be fetched because there are no UP servers; the dialog should not hang.

Additional info:
Comment 1 Shruti Sampat 2013-11-07 04:34:44 EST
Created attachment 820952 [details]
engine logs
Comment 3 Shruti Sampat 2013-11-26 08:03:38 EST
The remove-brick status dialog was seen to hang when following the steps listed below -

1. On a cluster of 4 nodes, kill glusterd on one of the servers (say server1) and check the status dialog - it does not hang and renders correctly.

2. Kill glusterd on another server (say server2) and bring it back up on server1.
Check the status dialog - it sometimes hangs, and at other times shows a message that data could not be fetched for the remove-brick operation.
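
Roughly, at the CLI (this assumes RHEL 6-style service management on the storage nodes; host names are illustrative):

# on server1 (step 1) - then check the status dialog in the Console
service glusterd stop
# on server2 (step 2)
service glusterd stop
# back on server1 (step 2, second part) - then check the status dialog again
service glusterd start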

Re-assigning the bug.
Comment 4 anmol babu 2013-11-26 09:22:17 EST
We had seen a similar issue some time back in BZ 1015394.

So, if you can reproduce this issue, could you please execute the remove-brick status command with "--xml" in the Gluster CLI and check whether it displays any output? We had seen the same problem of no XML output from the Gluster CLI for rebalance status (BZ 1015394, which is blocked by BZ 1028325).
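
For example, something along these lines (volume and brick names are placeholders carried over from the reproduction sketch above; --xml asks the Gluster CLI for machine-readable output):

gluster volume remove-brick testvol server1:/bricks/b2 status --xml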
Comment 5 Shruti Sampat 2013-11-27 02:26:52 EST
Saw that the remove-brick status XML output returns null, even though the plain remove-brick status command returns the status. So this is also dependent on BZ #1028325.
Comment 6 Shruti Sampat 2013-12-13 07:37:32 EST
Verified as fixed in Red Hat Storage Console version 2.1.2-0.27.beta.el6_5 with glusterfs 3.4.0.49rhs. The status dialog no longer hangs and the proper status is displayed.
Comment 8 errata-xmlrpc 2014-02-25 03:01:40 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
