Bug 1179640 - Enabling quota (default) causes healing of directory xattrs to fail.
Summary: Enabling quota (default) causes healing of directory xattrs to fail.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1178590
 
Reported: 2015-01-07 09:19 UTC by Pranith Kumar K
Modified: 2015-05-14 17:45 UTC
CC: 7 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1178590
Environment:
Last Closed: 2015-05-14 17:28:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Pranith Kumar K 2015-01-07 09:19:09 UTC
+++ This bug was initially created as a clone of Bug #1178590 +++

Description of problem:
I create a disperse 3 redundancy 1 volume, start it, and enable quota. I take one brick down, copy some files and directories to the mountpoint, then restart that brick. Then I run:
find /cluster2/test/ -d -exec getfattr -h -n trusted.ec.heal {} \;
(I have merged a patch to make this trigger heals; see my comment below for the link.)
All REG files are OK: trusted.ec.heal="Good: 111, Bad: 000".
But all DIRs fail: trusted.ec.heal="Good: 110, Bad: 000".
Running the same find command again, the DIRs still cannot be healed.

Then I disable quota and run the find command once more; now all DIRs are OK.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a disperse 3 redundancy 1 volume.
2. Enable quota (default settings).
3. Take one brick down.
4. Copy some files and directories to the mountpoint.
5. Restart that brick.
6. Run: find /cluster2/test/ -d -exec getfattr -h -n trusted.ec.heal {} \;
7. All REGs are OK, but all DIRs fail.

Actual results:
Healing a DIR fails when quota is enabled.

Expected results:
Healing a DIR succeeds when quota is enabled.


Additional info:

--- Additional comment from jiademing on 2015-01-04 21:51:08 EST ---

Brick error log:

[2014-12-26 10:51:40.116480] E [quota.c:3303:quota_setxattr] 0-test-quota: attempt to set internal xattr: trusted.digioceanfs.quota*: Operation not permitted

--- Additional comment from Jeff Darcy on 2015-01-06 08:11:01 EST ---

jiademing, I can't find the patch you mentioned.  Could you please post it to review.gluster.org or (less optimally) attach it here?  Thanks.

This might relate to (or interact with) http://review.gluster.org/#/c/9385/

--- Additional comment from Anand Avati on 2015-01-07 04:02:17 EST ---

REVIEW: http://review.gluster.org/9401 (cluster/ec: Do not modify quota, selinux xattrs in healing) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Pranith Kumar K on 2015-01-07 04:03:26 EST ---

The patch above fixes the issue reported in the 'steps'. There is still more to the bug: the quota-limit xattr is not allowed to be healed unless the pid of the process doing it is negative, i.e. an internal process. I will send that patch once I discuss the solution with Xavi.
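
For reference, the "negative pid" convention works like this: every GlusterFS fop carries the pid of the process that issued it on its call frame, and internally generated fops (self-heal daemon, rebalance, and so on) use a negative pid. A minimal sketch of the check, with a stand-in struct rather than the real call_frame_t:

    #include <stdbool.h>

    /* Stand-in for the pid field carried on a GlusterFS call frame;
     * the real structure lives in libglusterfs. */
    struct call_root {
        int pid;
    };

    /* Internally generated fops (self-heal, rebalance, ...) are marked
     * with a negative pid, which is how a translator can tell an
     * internal process apart from a user request. */
    static bool is_internal_fop(const struct call_root *root)
    {
        return root->pid < 0;
    }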

--- Additional comment from jiademing on 2015-01-07 04:12:04 EST ---

(In reply to Jeff Darcy from comment #2)
> jiademing, I can't find the patch you mentioned.  Could you please post it
> to review.gluster.org or (less optimally) attach it here?  Thanks.
> 
> This might relate to (or interact with) http://review.gluster.org/#/c/9385/

I merged the patch http://review.gluster.org/#/c/9072/, so I can use "find /cluster2/test/ -d -exec getfattr -h -n trusted.ec.heal {} \;" to heal the files and dirs.

I looked at patch http://review.gluster.org/#/c/9385/; it rejects user attempts to modify ec's xattrs, but it does not resolve the problem above.

ec wants to heal the xattrs of DIRs, but the quota translator does not allow ec to modify some of them (like trusted.glusterfs.quota.dirty), so healing the xattrs fails.

Brick error log:
[2014-12-26 10:51:40.116480] E [quota.c:3303:quota_setxattr] 0-test-quota: attempt to set internal xattr: trusted.glusterfs.quota*: Operation not permitted
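
That log line comes from quota's setxattr path, which refuses any key under its internal prefix. A hedged sketch of the rejection (the function name is illustrative, not the actual quota.c code; only the pattern and errno mirror the log above):

    #include <errno.h>
    #include <fnmatch.h>

    /* Keys matching this pattern are quota's own bookkeeping
     * (quota.size, quota.dirty, quota.limit-set, ...). */
    #define QUOTA_INTERNAL_XATTR_PATTERN "trusted.glusterfs.quota*"

    /* Mirrors the brick log: "attempt to set internal xattr:
     * trusted.glusterfs.quota*: Operation not permitted". */
    static int quota_check_setxattr_key(const char *key)
    {
        if (fnmatch(QUOTA_INTERNAL_XATTR_PATTERN, key, 0) == 0)
            return -EPERM;
        return 0;
    }

This is what ec's directory heal runs into: it tries to copy the quota xattrs from the good bricks along with everything else, and the brick-side quota translator rejects the setxattr, so the heal never completes.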

Comment 1 Anand Avati 2015-01-07 09:20:40 UTC
REVIEW: http://review.gluster.org/9401 (cluster/ec: Do not modify quota, selinux xattrs in healing) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2015-01-07 10:38:46 UTC
REVIEW: http://review.gluster.org/9401 (cluster/ec: Do not modify quota, selinux xattrs in healing) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Anand Avati 2015-01-09 05:55:15 UTC
COMMIT: http://review.gluster.org/9401 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit cf0770c61af2fa49fa435baf62cd5f28569175e4
Author: Pranith Kumar K <pkarampu>
Date:   Wed Jan 7 12:08:48 2015 +0530

    cluster/ec: Do not modify quota, selinux xattrs in healing
    
    Problem:
    EC heal tries to heal quota-size, selinux xattrs as well.  quota-size is
    private to the brick but since quotad accesses them using the standard
    interface as well, they can not be filtered in the fops.
    
    Fix:
    Ignore QUOTA_SIZE_KEY and SELINUX xattrs during heal.
    
    Change-Id: I1572f9e2fcba7f120b4265e034953a15ff297f04
    BUG: 1179640
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9401
    Reviewed-by: Xavier Hernandez <xhernandez>
    Tested-by: Gluster Build System <jenkins.com>
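
The shape of the fix, per the commit message, is to filter these keys out of the xattr set that heal copies from the good bricks. A self-contained sketch of such a filter (the key choice follows the commit text, but the exact strings and the function and array names are assumptions, not the actual cluster/ec code):

    #include <stdbool.h>
    #include <string.h>

    /* Xattrs ec must not copy onto the brick being healed: the quota
     * size key is private to each brick, and the SELinux label is
     * managed by the OS rather than by ec. */
    static const char *ec_heal_skip_keys[] = {
        "trusted.glusterfs.quota.size",   /* QUOTA_SIZE_KEY */
        "security.selinux",               /* SELinux label */
        NULL,
    };

    static bool ec_heal_should_skip(const char *key)
    {
        for (const char **k = ec_heal_skip_keys; *k != NULL; k++) {
            if (strcmp(key, *k) == 0)
                return true;
        }
        return false;
    }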

Comment 4 Niels de Vos 2015-05-14 17:28:51 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


