+++ This bug was initially created as a clone of Bug #1248941 +++

Description of problem:
A REMOVEXATTR "No data available" warning is logged when files are written to a glusterfs mount.

Version-Release number of selected component (if applicable):
3.7.2

How reproducible:
Very

Steps to Reproduce:
1. Write a file in the glusterfs mount (# touch /ssd_data/test)

Actual results:
# tail -n 1 /var/log/glusterfs/ssd_data.log
[2015-07-31 07:51:07.354206] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 362196675: REMOVEXATTR() /test => -1 (No data available)

Expected results:
No error in the log.

Additional info:
REVIEW: http://review.gluster.org/12015 (mount/fuse: Avoid logging ENODATA in {f}removexattr) posted (#1) for review on master by Vijay Bellur (vbellur)
REVIEW: http://review.gluster.org/12015 (mount/fuse: Log ENODATA as DEBUG in {f}removexattr) posted (#2) for review on master by Vijay Bellur (vbellur)
COMMIT: http://review.gluster.org/12015 committed in master by Raghavendra Bhat (raghavendra)
------
commit bd9dd34700de63f96b9fc65125d539b2c16fa6bf
Author: Vijay Bellur <vbellur>
Date: Wed Aug 26 15:24:39 2015 +0530

    mount/fuse: Log ENODATA as DEBUG in {f}removexattr

    Logging ENODATA errors for {f}removexattr at a higher log level does
    not add much value and causes a flood of log messages, as per multiple
    reports.

    Added a new callback, fuse_removexattr_cbk(), to be used with
    removexattr fops. ENODATA now gets logged at log level DEBUG in
    fuse_removexattr_cbk(). This also avoids adding more conditional
    checks in the common fuse_err_cbk() callback.

    Change-Id: I1585b4d627e0095022016c47d7fd212018a7194b
    BUG: 1257110
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/12015
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra Bhat <raghavendra>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user