Bug 1033035 - stop rebalance does not succeed when glusterd is down on one of the nodes.
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Hardware: Unspecified OS: Unspecified
Priority: high Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: anmol babu
Keywords: ZStream
Depends On: 1028325 1036564
Reported: 2013-11-21 08:05 EST by RamaKasturi
Modified: 2015-05-13 12:26 EDT (History)
CC: 7 users

See Also:
Fixed In Version: cb10
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2014-02-25 03:05:16 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
Attaching the screenshot. (202.66 KB, image/png)
2013-12-13 07:18 EST, RamaKasturi

Description RamaKasturi 2013-11-21 08:05:48 EST
Description of problem:
When glusterd is down on one of the nodes, clicking on stop rebalance throws an error saying "Error while executing the action stopRebalanceGlusterVolume : Command execution failed."

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a distribute volume and start it.
2. Start rebalance on the volume.
3. Once rebalance is started, stop glusterd on one of the nodes using the command "service glusterd stop".
4. Now click on the drop-down button in the Activities column and click Stop.

Actual results:
It gives an error saying "Error while executing the action stopRebalanceGlusterVolume : Command execution failed."

Expected results:
It should stop rebalance on the volume, and once glusterd is brought back up, the rebalance icon should show the status reported by the CLI.

Additional info:
Comment 2 anmol babu 2013-11-28 04:45:27 EST
er.cifs': 'enable', 'nfs.disable': 'off', 'auth.allow': '*'}}}}
Thread-121344::DEBUG::2013-11-28 20:42:20,716::BindingXMLRPC::984::vds::(wrapper) client []::call volumeRebalanceStop with ('vol_dis',) {} flowID [6991e3f9]
Thread-121344::ERROR::2013-11-28 20:42:20,981::BindingXMLRPC::1000::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 989, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 125, in volumeRebalanceStop
    return self.svdsmProxy.glusterVolumeRebalanceStop(volumeName, force)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
  File "<string>", line 2, in glusterVolumeRebalanceStop
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
return code: 2

[root@localhost ~]# gluster volume rebalance vol_dis stop --xml
[root@localhost ~]#
[root@localhost ~]# echo $?

As can be seen, the rebalance stop command is not returning any XML, hence this issue. So it is a gluster bug again.
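The failure mode above (non-zero return code and empty stdout from `gluster ... --xml`) can be guarded against on the caller's side. The sketch below is a hypothetical illustration, not the actual vdsm code path; the function names `rebalance_stop` and `parse_rebalance_xml` are invented for this example, and only the exception name `GlusterCmdExecFailedException` is taken from the traceback in this bug:

```python
import subprocess
import xml.etree.ElementTree as ET


class GlusterCmdExecFailedException(Exception):
    """Raised when the gluster CLI fails or produces no XML output."""


def parse_rebalance_xml(output, rc):
    # Guard first: as seen in this bug, the CLI can exit with return
    # code 2 and print nothing at all, so parsing must not be attempted
    # on empty stdout.
    if rc != 0 or not output.strip():
        raise GlusterCmdExecFailedException(
            "Command execution failed, return code: %s" % rc)
    # Only now is it safe to parse the XML the CLI emitted.
    return ET.fromstring(output)


def rebalance_stop(volume_name, gluster_cmd="gluster"):
    # Run the CLI with --xml and capture both stdout and the exit code.
    proc = subprocess.run(
        [gluster_cmd, "volume", "rebalance", volume_name, "stop", "--xml"],
        capture_output=True, text=True)
    return parse_rebalance_xml(proc.stdout, proc.returncode)
```

With a guard like this, the empty-output case surfaces as a clear `GlusterCmdExecFailedException` with the return code, rather than an XML parse error deep inside the stack.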
Comment 3 Prasanth 2013-12-10 00:31:58 EST
Anmol, can you please update the 'Fixed In Version' of this bug?
Comment 4 RamaKasturi 2013-12-13 07:17:26 EST
Verified and works fine with cb11 build rhsc-2.1.2-0.27.beta.el6_5.noarch.

After bringing glusterd down on one of the nodes and trying to stop rebalance, rebalance stops.

Attaching the screenshot for the same.
Comment 5 RamaKasturi 2013-12-13 07:18:02 EST
Created attachment 836285 [details]
Attaching the screenshot.
Comment 7 errata-xmlrpc 2014-02-25 03:05:16 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

