Bug 1033035 - stop rebalance does not succeed when glusterd is down on one of the nodes.
Summary: stop rebalance does not succeed when glusterd is down on one of the nodes.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: anmol babu
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On: 1028325 1036564
Blocks:
 
Reported: 2013-11-21 13:05 UTC by RamaKasturi
Modified: 2015-05-13 16:26 UTC
CC List: 7 users

Fixed In Version: cb10
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-25 08:05:16 UTC
Embargoed:


Attachments
Attaching the screenshot. (202.66 KB, image/png)
2013-12-13 12:18 UTC, RamaKasturi


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:0208 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #2 2014-02-25 12:20:30 UTC

Description RamaKasturi 2013-11-21 13:05:48 UTC
Description of problem:
When glusterd is down on one of the nodes, clicking on stop rebalance throws an error saying "Error while executing the action stopRebalanceGlusterVolume : Command execution failed."

Version-Release number of selected component (if applicable):
rhsc-2.1.2-0.24.master.el6_5.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume and start it.
2. start rebalance on the volume.
3. Once rebalance is started, stop glusterd on one of the nodes using the command "service glusterd stop".
4. Now click the drop-down button in the Activities column and click Stop.
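
The same failure can also be driven at the CLI level. Below is a minimal Python sketch (hypothetical, not part of rhsc or vdsm) that assumes a volume named "vol_dis" with an active rebalance and glusterd already stopped on one peer:

# Hypothetical reproduction helper; assumes the gluster CLI is installed,
# the volume "vol_dis" has an active rebalance, and glusterd is down on a peer.
import subprocess

def stop_rebalance(volume):
    """Run 'gluster volume rebalance <vol> stop --xml' and report the result."""
    cmd = ["gluster", "volume", "rebalance", volume, "stop", "--xml"]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    print("return code:", proc.returncode)  # observed in this bug: 2 when a peer's glusterd is down
    print("stdout:", repr(proc.stdout))     # observed in this bug: empty, i.e. no XML returned
    return proc

stop_rebalance("vol_dis")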

Actual results:
It gives an error saying "Error while executing the action stopRebalanceGlusterVolume : Command execution failed."

Expected results:
It should stop rebalance on the volume, and once glusterd is brought back up, the rebalance icon should show the status as reported by the CLI.

Additional info:

Comment 2 anmol babu 2013-11-28 09:45:27 UTC
er.cifs': 'enable', 'nfs.disable': 'off', 'auth.allow': '*'}}}}
Thread-121344::DEBUG::2013-11-28 20:42:20,716::BindingXMLRPC::984::vds::(wrapper) client [10.70.37.129]::call volumeRebalanceStop with ('vol_dis',) {} flowID [6991e3f9]
Thread-121344::ERROR::2013-11-28 20:42:20,981::BindingXMLRPC::1000::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 989, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 125, in volumeRebalanceStop
    return self.svdsmProxy.glusterVolumeRebalanceStop(volumeName, force)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeRebalanceStop
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
return code: 2


[root@localhost ~]# gluster volume rebalance vol_dis stop --xml
[root@localhost ~]#
[root@localhost ~]# echo $?
2

As can be seen, the rebalance stop command is not returning any XML output, hence this issue. So, it is a gluster bug again.
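
For illustration, here is a minimal sketch of why an empty --xml response surfaces as "Command execution failed"; the names below are hypothetical and only approximate vdsm's actual parsing path:

# Illustrative only: approximates how an empty --xml response from the
# gluster CLI would surface as GlusterCmdExecFailedException in vdsm.
import subprocess
import xml.etree.ElementTree as ET

class GlusterCmdExecFailedException(Exception):
    pass

def gluster_xml_command(args):
    proc = subprocess.run(["gluster"] + args + ["--xml"],
                          capture_output=True, text=True)
    if proc.returncode != 0 or not proc.stdout.strip():
        # With glusterd down on a peer, 'rebalance ... stop --xml' exits
        # with return code 2 and prints no XML, so there is nothing to parse.
        raise GlusterCmdExecFailedException(
            "Command execution failed\nreturn code: %d" % proc.returncode)
    return ET.fromstring(proc.stdout)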

Comment 3 Prasanth 2013-12-10 05:31:58 UTC
Anmol, can you please update the 'Fixed In Version' field of this bug?

Comment 4 RamaKasturi 2013-12-13 12:17:26 UTC
Verified and works fine with cb11 build rhsc-2.1.2-0.27.beta.el6_5.noarch.

After bringing glusterd down on one of the nodes and trying to stop rebalance, rebalance stops.

Attaching the screenshot for the same.

Comment 5 RamaKasturi 2013-12-13 12:18:02 UTC
Created attachment 836285
Attaching the screenshot.

Comment 7 errata-xmlrpc 2014-02-25 08:05:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

