Description of problem: Bulk removexattr is an internal fop that comes from AFR metadata self-heal. Some xattrs have special behavior: removexattr("posix.system_acl_access"), for example, removes more than one xattr on the file, so a subsequent removexattr on those already-deleted xattrs fails with ENODATA/ENOATTR. Since all AFR cares about is that these xattrs are removed, and they are already deleted, the operation can be treated as success.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
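The policy described above can be sketched as follows. This is a minimal, illustrative Python sketch of the error-handling rule the fix applies (the actual fix lives in C in the storage/posix translator); the function name `bulk_removexattr` and the injectable `remove` parameter are assumptions made here for testability, not part of the gluster code.

```python
import errno
import os

def bulk_removexattr(path, names, remove=os.removexattr):
    """Remove each xattr named in `names` from `path`.

    Removing one xattr (e.g. an ACL key) may implicitly delete
    others that are also in the bulk request, so a later removal
    can fail with ENODATA/ENOATTR. That outcome is what the caller
    wanted anyway, so it is treated as success rather than an error.
    """
    for name in names:
        try:
            remove(path, name)
        except OSError as e:
            # On Linux, ENOATTR is an alias for ENODATA.
            # Any other errno (EPERM, EIO, ...) is a real failure.
            if e.errno != errno.ENODATA:
                raise
```

The `remove` parameter defaults to `os.removexattr` but can be swapped out, which makes the "already gone counts as success" rule easy to exercise without a real xattr-capable filesystem.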
REVIEW: http://review.gluster.org/9049 (storage/posix: Treat ENODATA/ENOATTR as success in bulk removexattr) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9049 (storage/posix: Treat ENODATA/ENOATTR as success in bulk removexattr) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/9049 committed in master by Vijay Bellur (vbellur)
------
commit b42255e87a06679b803e6bd83d02465d82c357b6
Author: Pranith Kumar K <pkarampu>
Date: Wed Nov 5 09:04:50 2014 +0530

    storage/posix: Treat ENODATA/ENOATTR as success in bulk removexattr

    Bulk remove xattr is internal fop in gluster. Some of the xattrs may
    have special behavior. Ex: removexattr("posix.system_acl_access"),
    removes more than one xattr on the file that could be present in the
    bulk-removal request. Removexattr of these deleted xattrs will fail
    with either ENODATA/ENOATTR. Since all this fop cares is removal of
    the xattrs in bulk-remove request and if they are already deleted,
    it can be treated as success.

    Change-Id: Id8f2a39b68ab763ec8b04cb71b47977647f22da4
    BUG: 1160509
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9049
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user