Description of problem:
remove-brick fails to start on a volume whose quota limit is full.

Version-Release number of selected component (if applicable):
[root@nfs1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4.0.12rhs.beta4-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta4-1.el6rhs.x86_64
glusterfs-server-3.4.0.12rhs.beta4-1.el6rhs.x86_64

How reproducible:
First attempt at quota and brick operations on this build.

Steps to Reproduce:
1. Create a volume and start it.
2. Enable quota.
3. Set a limit of 1 GB on "/".
4. Mount the volume over NFS.
5. Create two directories, dir1 and dir2.
6. Create data inside them.
7. Fill the volume up to the quota limit:

[root@nfs1 ~]# gluster volume quota quota-dist-rep list
        Path        Hard-limit  Soft-limit     Used   Available
--------------------------------------------------------------------------------
/                          1GB         90%    1.0GB      0Bytes

8. Start remove-brick.

Actual results:
[root@nfs1 ~]# gluster volume remove-brick quota-dist-rep 10.70.37.180:/rhs/bricks/quota-d1r1-add 10.70.37.80:/rhs/bricks/quota-d1r2-add start
volume remove-brick start: failed: A remove-brick task on volume quota-dist-rep is not yet committed. Either commit or stop the remove-brick task.

[root@nfs1 ~]# gluster volume remove-brick quota-dist-rep 10.70.37.180:/rhs/bricks/quota-d1r1-add 10.70.37.80:/rhs/bricks/quota-d1r2-add status
        Node  Rebalanced-files        size     scanned    failures       status  run-time in secs
   ---------       -----------  ----------  ----------  ----------  -----------  ----------------
   localhost                 0      0Bytes           0           0  not started              0.00
42.70.37.216                 0      0Bytes        1028           0    completed              4.00
 10.70.37.80                 0      0Bytes        1028           0    completed              3.00
10.70.37.139                 0      0Bytes        1028           0    completed              4.00

Expected results:
remove-brick should succeed irrespective of quota.

Additional info:
Just before this, I filed BZ 985783.
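The reproduction steps above can be condensed into a shell sketch. The volume name, brick paths, and server addresses are taken from the output in this report; the original brick layout, the mount point (/mnt/quota), and the NFS server hostname (nfs1) are not shown in the report and are illustrative assumptions.

```shell
# Sketch of the reproduction steps. Assumes a working gluster cluster;
# the volume layout, mount point, and hostname are assumptions.
gluster volume create quota-dist-rep replica 2 \
    10.70.37.180:/rhs/bricks/quota-d1r1 10.70.37.80:/rhs/bricks/quota-d1r2
gluster volume start quota-dist-rep

# Enable quota and set a 1 GB hard limit on the volume root.
gluster volume quota quota-dist-rep enable
gluster volume quota quota-dist-rep limit-usage / 1GB

# Mount over NFS (gluster's built-in NFS server is NFSv3) and fill
# the volume up to the quota limit.
mount -t nfs -o vers=3 nfs1:/quota-dist-rep /mnt/quota
mkdir /mnt/quota/dir1 /mnt/quota/dir2
dd if=/dev/zero of=/mnt/quota/dir1/file1 bs=1M count=512
dd if=/dev/zero of=/mnt/quota/dir2/file2 bs=1M count=512

# With the quota full, attempt to remove a brick pair -- the step
# that fails in this report.
gluster volume remove-brick quota-dist-rep \
    10.70.37.180:/rhs/bricks/quota-d1r1-add \
    10.70.37.80:/rhs/bricks/quota-d1r2-add start
```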
This is not in any way related to quota. There was a problem with starting remove-brick around the time the packages used here were built, which caused all remove-brick start commands to fail. A bug was filed for this (https://bugzilla.redhat.com/show_bug.cgi?id=982184) and fixed, and the fix has been available since glusterfs-3.4.0.12rhs.beta6-1. The original bug has been verified and closed with ERRATA. Moving this to ON_QA for verification, with fixed-in version glusterfs-3.4.0.33rhs.
Verified on 3.4.0.34rhs-1.el6rhs.x86_64
Shanks, since this bug was a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=982184, I am not adding errata doc text to this bug. Does that make sense?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1769.html