Bug 1033035

Summary: stop rebalance does not succeed when glusterd is down on one of the nodes.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: RamaKasturi <knarra>
Component: rhsc
Assignee: anmol babu <anbabu>
Status: CLOSED ERRATA
QA Contact: RamaKasturi <knarra>
Severity: high
Priority: high
Version: 2.1
CC: anbabu, dpati, dtsang, mmahoney, pprakash, rhs-bugs, ssampat
Keywords: ZStream
Target Release: RHGS 2.1.2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: cb10
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-02-25 08:05:16 UTC
Bug Depends On: 1028325, 1036564
Attachments: Attaching the screen shot (flags: none)

Description RamaKasturi 2013-11-21 13:05:48 UTC
Description of problem:
When glusterd is down on one of the nodes, clicking on stop rebalance throws an error saying "Error while executing the action stopRebalanceGlusterVolume : Command execution failed."

Version-Release number of selected component (if applicable):
rhsc-2.1.2-0.24.master.el6_5.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume and start it.
2. Start rebalance on the volume.
3. Once rebalance has started, stop glusterd on one of the nodes using the command "service glusterd stop".
4. In the Activities column, click the drop-down button and select Stop (equivalent CLI commands for steps 1-3 are sketched below).
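
A minimal CLI sketch of steps 1-3 (the volume name vol_dis is taken from the logs in comment 2; host names and brick paths are placeholders):

[root@node1 ~]# gluster volume create vol_dis node1:/bricks/b1 node2:/bricks/b2
[root@node1 ~]# gluster volume start vol_dis
[root@node1 ~]# gluster volume rebalance vol_dis start
[root@node2 ~]# service glusterd stop

Step 4 is then performed from the RHSC UI.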

Actual results:
It gives an error saying "Error while executing the action stopRebalanceGlusterVolume : Command execution failed."

Expected results:
It should stop rebalance on the volume, and once glusterd is brought back up, the rebalance icon should show the status reported by the CLI.
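
The CLI status to compare against can be checked with (volume name as in the logs below):

[root@node1 ~]# gluster volume rebalance vol_dis status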

Additional info:

Comment 2 anmol babu 2013-11-28 09:45:27 UTC
er.cifs': 'enable', 'nfs.disable': 'off', 'auth.allow': '*'}}}}
Thread-121344::DEBUG::2013-11-28 20:42:20,716::BindingXMLRPC::984::vds::(wrapper) client [10.70.37.129]::call volumeRebalanceStop with ('vol_dis',) {} flowID [6991e3f9]
Thread-121344::ERROR::2013-11-28 20:42:20,981::BindingXMLRPC::1000::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 989, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 125, in volumeRebalanceStop
    return self.svdsmProxy.glusterVolumeRebalanceStop(volumeName, force)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeRebalanceStop
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
return code: 2


[root@localhost ~]# gluster volume rebalance vol_dis stop --xml
[root@localhost ~]#
[root@localhost ~]# echo $?
2

As can be seen, the rebalance stop command does not return any XML, which causes this issue. So it is again a gluster bug.
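
For illustration, the failure mode can be detected directly in the shell (a minimal sketch, assuming the same vol_dis volume; this is not vdsm's actual error handling):

# illustrative guard: gluster exits non-zero and prints no XML
out=$(gluster volume rebalance vol_dis stop --xml)
rc=$?
if [ $rc -ne 0 ] && [ -z "$out" ]; then
    echo "rebalance stop failed (rc=$rc) and returned no XML" >&2
fi

This matches the transcript above: the command exits with return code 2 and prints nothing, so vdsm's XML parsing has nothing to consume.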

Comment 3 Prasanth 2013-12-10 05:31:58 UTC
Anmol, can you please update the 'Fixed In Version' field of this bug?

Comment 4 RamaKasturi 2013-12-13 12:17:26 UTC
Verified and works fine with cb11 build rhsc-2.1.2-0.27.beta.el6_5.noarch.

After bringing glusterd down on one of the nodes and trying to stop rebalance, rebalance stops.

Attaching the screenshot for the same.

Comment 5 RamaKasturi 2013-12-13 12:18:02 UTC
Created attachment 836285 [details]
Attaching the screen shot .

Comment 7 errata-xmlrpc 2014-02-25 08:05:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html