Description of problem:
I create a disperse 3 redundancy 1 volume, start it, then enable quota. I bring one brick down, copy some files and directories to the mountpoint, then restart that brick and run:

find /cluster2/test/ -d -exec getfattr -h -n trusted.ec.heal {} \;

(I have merged the heal patch.) All REG files are OK (trusted.ec.heal="Good: 111, Bad: 000"), but all DIRs fail (trusted.ec.heal="Good: 110, Bad: 000"). Running the same find command again, the DIRs still cannot be healed. After I disable quota and run the find command once more, all DIRs are OK.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a disperse 3 redundancy 1 volume.
2. Enable quota (default settings).
3. Bring one brick down.
4. Copy some files and directories to the mountpoint.
5. Restart that brick.
6. find /cluster2/test/ -d -exec getfattr -h -n trusted.ec.heal {} \;
7. All REGs are OK, but all DIRs fail.

Actual results:
Healing a DIR fails when quota is enabled.

Expected results:
Healing a DIR succeeds when quota is enabled.

Additional info:
Brick error log:
[2014-12-26 10:51:40.116480] E [quota.c:3303:quota_setxattr] 0-test-quota: attempt to set internal xattr: trusted.digioceanfs.quota*: Operation not permitted
jiademing, I can't find the patch you mentioned. Could you please post it to review.gluster.org or (less optimally) attach it here? Thanks. This might relate to (or interact with) http://review.gluster.org/#/c/9385/
REVIEW: http://review.gluster.org/9401 (cluster/ec: Do not modify quota, selinux xattrs in healing) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
The patch above fixes the issue reported in the 'steps'. There is still more to the bug: the quota-limit xattr is not allowed to be healed unless the pid of the process doing it is negative, i.e. an internal process. I will send that patch once I discuss the solution with Xavi.
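As a rough illustration of the internal-process check mentioned above (a hedged sketch, not the actual GlusterFS code: `call_root_t` and the key literal are simplified stand-ins for `frame->root->pid` and the real quota-limit xattr key):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* simplified stand-in for the pid carried in frame->root */
typedef struct { int pid; } call_root_t;

/* Allow writes to the quota-limit xattr only for internal processes,
   which GlusterFS marks with a negative pid. */
static bool quota_limit_write_allowed(const call_root_t *root, const char *key)
{
    /* assumed key name for illustration */
    const char *limit_key = "trusted.glusterfs.quota.limit-set";

    if (strcmp(key, limit_key) != 0)
        return true;          /* not the quota-limit key: no restriction here */
    return root->pid < 0;     /* negative pid => internal process (e.g. shd) */
}
```

Under this scheme a self-heal daemon (negative pid) may write the limit, while an ordinary client write is refused.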
(In reply to Jeff Darcy from comment #2)
> jiademing, I can't find the patch you mentioned. Could you please post it
> to review.gluster.org or (less optimally) attach it here? Thanks.
>
> This might relate to (or interact with) http://review.gluster.org/#/c/9385/

I merged this patch, http://review.gluster.org/#/c/9072/, so I can use "find /cluster2/test/ -d -exec getfattr -h -n trusted.ec.heal {} \;" to heal the files and dirs.

I reviewed patch http://review.gluster.org/#/c/9385/; it rejects user modification of ec's xattrs, but it cannot resolve the problem above. ec wants to heal the xattrs of DIRs, but the quota translator does not allow ec to modify some xattrs (like trusted.glusterfs.quota.dirty), so healing the xattrs fails.

Brick error log:
[2014-12-26 10:51:40.116480] E [quota.c:3303:quota_setxattr] 0-test-quota: attempt to set internal xattr: trusted.glusterfs.quota*: Operation not permitted
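The rejection in that brick log can be sketched like this (a hypothetical simplification of the key check in quota_setxattr, using the same trusted.glusterfs.quota* pattern the log prints):

```c
#include <assert.h>
#include <errno.h>
#include <fnmatch.h>

/* Reject any externally-driven attempt to set a quota-internal xattr,
   mirroring the "attempt to set internal xattr" EPERM in the brick log. */
static int quota_setxattr_key_check(const char *key)
{
    if (fnmatch("trusted.glusterfs.quota*", key, 0) == 0)
        return -EPERM;   /* internal xattr: Operation not permitted */
    return 0;            /* anything else passes through */
}
```

Because ec's heal path goes through the same setxattr fop as a client would, its attempt to write trusted.glusterfs.quota.dirty hits this check and fails.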
REVIEW: http://review.gluster.org/9454 (cluster/ec: Do not modify quota, selinux xattrs in healing) posted (#1) for review on release-3.6 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/9454 committed in release-3.6 by Raghavendra Bhat (raghavendra)
------
commit 8fd0a88eed39e1f70f0057efb2f92564fb135186
Author: Pranith Kumar K <pkarampu>
Date:   Wed Jan 7 12:08:48 2015 +0530

    cluster/ec: Do not modify quota, selinux xattrs in healing

    Backport of http://review.gluster.org/9401

    Problem:
    EC heal tries to heal quota-size, selinux xattrs as well. quota-size
    is private to the brick but since quotad accesses them using the
    standard interface as well, they can not be filtered in the fops.

    Fix:
    Ignore QUOTA_SIZE_KEY and SELINUX xattrs during heal.

    BUG: 1178590
    Change-Id: Id569a49ef996e5507f4474c99b6cdc22781ad82d
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9454
    Reviewed-by: Xavier Hernandez <xhernandez>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra Bhat <raghavendra>
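The committed fix boils down to skipping those keys while healing; a minimal sketch (the key string and prefix here are assumptions standing in for the QUOTA_SIZE_KEY macro and the selinux xattr namespace used in the real patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Decide whether EC heal should skip copying this xattr from the good
   bricks onto the brick being healed. */
static bool ec_heal_skip_xattr(const char *key)
{
    /* assumed literal behind QUOTA_SIZE_KEY: brick-private quota size */
    if (strcmp(key, "trusted.glusterfs.quota.size") == 0)
        return true;
    /* selinux labels live under the "security." namespace */
    if (strncmp(key, "security.", strlen("security.")) == 0)
        return true;
    return false;
}
```

With such a filter, heal still equalizes ec's own metadata (versions, dirty flags) but leaves the quota and selinux bookkeeping to the translators that own it.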
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v3.6.3, please open a new bug report.

glusterfs-v3.6.3 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2015-April/021669.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user