Bug 1020205 - [RHSC] After the completion of a remove-brick operation, cannot start remove-brick again.
Summary: [RHSC] After the completion of a remove-brick operation, cannot start remove-brick again.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Kanagaraj
QA Contact: Shruti Sampat
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-17 09:36 UTC by Shruti Sampat
Modified: 2016-04-18 10:06 UTC
CC List: 9 users

Fixed In Version: CB6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-25 07:53:53 UTC
Embargoed:


Attachments
engine logs (16.44 MB, text/x-log), attached 2013-10-17 09:39 UTC by Shruti Sampat


Links
Red Hat Product Errata RHEA-2014:0208 (SHIPPED_LIVE): Red Hat Storage 2.1 enhancement and bug fix update #2, last updated 2014-02-25 12:20:30 UTC
oVirt gerrit 20287

Description Shruti Sampat 2013-10-17 09:36:50 UTC
Description of problem:
-----------------------
After the successful completion of a remove-brick operation on a volume, when attempting to start another remove-brick operation, the following error is seen -

Error while executing action: Cannot start removing Gluster Volume. A task is in progress on the volume dis_vol in cluster test.

Actually, there were no running tasks on the volume. See below - 

[root@rhs ~]# gluster v status dis_vol
Status of volume: dis_vol
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.205:/rhs/brick1/dis_vol/b2               49152   Y       15511
Brick 10.70.37.44:/rhs/brick1/dis_vol/b3                49153   Y       1256
Brick 10.70.37.66:/rhs/brick1/dis_vol/b4                49153   Y       32013
NFS Server on localhost                                 2049    Y       13015
NFS Server on 10.70.37.158                              2049    Y       27561
NFS Server on 10.70.37.66                               2049    Y       11850
NFS Server on 10.70.37.205                              2049    Y       27737
 
Task Status of Volume dis_vol
------------------------------------------------------------------------------
There are no active volume tasks

Starting rebalance from the Console also resulted in a similar error, but starting rebalance from the gluster CLI succeeded.
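
For reference, the rebalance that succeeded can be driven directly from the gluster CLI roughly as follows (volume name taken from this report; exact output depends on the glusterfs version):

# start a rebalance on the volume from the gluster CLI
gluster volume rebalance dis_vol start
# check the progress of the data migration
gluster volume rebalance dis_vol status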

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.2-0.0.scratch.beta1.el6_4 

How reproducible:
Observed it once.

Steps to Reproduce:
1. Create a volume, start it, mount it, and create data at the mount point.
2. Start a remove-brick operation for one brick on the above volume.
3. After data migration is over, commit the remove-brick operation.
4. Try to start another remove-brick on the same volume (see the CLI sketch below).
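
For reference, a rough sketch of the corresponding remove-brick lifecycle on the gluster CLI; the host and brick path are placeholders, and the steps above were actually driven from the Console:

# step 2: start migrating data off the brick that is being removed
gluster volume remove-brick dis_vol <HOST>:<BRICK_PATH> start
# step 3: poll until the migration shows completed, then commit;
# the commit is what actually removes the brick from the volume
gluster volume remove-brick dis_vol <HOST>:<BRICK_PATH> status
gluster volume remove-brick dis_vol <HOST>:<BRICK_PATH> commit
# step 4: the equivalent start for another brick is what the Console then refuses
gluster volume remove-brick dis_vol <OTHER_HOST>:<OTHER_BRICK_PATH> start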

Actual results:
Remove-brick fails to start, with the error mentioned above. Rebalance also fails to start on the volume from the Console with a similar error, but starting rebalance from the gluster CLI succeeds. This confirms that there were no running gluster tasks.

Expected results:
Remove-brick should start again from the UI, because there are no running tasks on the volume.

Additional info:

Comment 1 Shruti Sampat 2013-10-17 09:39:33 UTC
Created attachment 813251 [details]
engine logs

Comment 3 Kanagaraj 2013-10-18 10:51:24 UTC
Now the 'Commit' remove-brick operation removes the bricks and clears the taskId on the volume.

The remove-brick icon in the volumes table will disappear if the commit is successful.

Comment 4 Shruti Sampat 2013-11-06 17:54:27 UTC
Verified as fixed in Red Hat Storage Console Version: 2.1.2-0.22.master.el6_4.

Comment 6 errata-xmlrpc 2014-02-25 07:53:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

