Bug 1192832

Summary: log files get flooded when removexattr() can't find a specified key or value
Product: [Community] GlusterFS Reporter: Vijay Bellur <vbellur>
Component: loggingAssignee: Vijay Bellur <vbellur>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 3.5.3CC: bernhard.glomm, bugs, foudoulisathanasios, gluster-bugs, ndevos
Target Milestone: ---Keywords: Triaged
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-3.5.4 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1144527 Environment:
Last Closed: 2015-06-03 21:09:14 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1144527, 1205715    
Bug Blocks: 1141158, 1188064    

Description Vijay Bellur 2015-02-15 18:47:37 UTC
+++ This bug was initially created as a clone of Bug #1144527 +++

Description of problem:

I'm running:
# glusterfs -V
glusterfs 3.4.5 built on Aug  6 2014 19:15:07
on Ubuntu 14.04, from the semiosis PPA.

I have a replica 2 volume across 2 servers.
A separate client mounts the volume via FUSE.
When rsyncing data onto the FUSE mount, I get entries like the
following in the client log for each file that is copied onto the volume:

[2014-09-19 07:57:39.877806] W [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-<volume_name>-client-0: remote operation failed: No data available
[2014-09-19 07:57:39.877963] W [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-<volume_name>-client-1: remote operation failed: No data available
[2014-09-19 07:57:39.878462] W [fuse-bridge.c:1172:fuse_err_cbk] 0-glusterfs-fuse: 21741144: REMOVEXATTR() /<path_to_file>/.<file_name_with_a_leading_dot> => -1 (No data available)

The data itself is present and accessible on the volume and on both bricks.

So three questions:
a.) What kind of data is not available? What is the client complaining about?
b.) Since it is a warning and the data seems to be okay, is there anything I need to fix?
c.) How can I reduce the flood of log lines? It's more than 3 GB/day.
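
Regarding question a.): "No data available" is strerror(ENODATA), the errno the kernel returns when removexattr() is asked to remove an extended attribute key that is not set on the file; rsync can issue such calls when handling xattrs/ACLs on files that never had them. A minimal sketch of the underlying syscall behaviour (the path and key below are made up for illustration):

    /* Minimal reproduction sketch: removing a never-set xattr fails
     * with ENODATA ("No data available") on Linux. Runs on any local
     * filesystem that supports extended attributes. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(void)
    {
        const char *path = "/tmp/xattr-demo";  /* hypothetical test file */
        FILE *f = fopen(path, "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fclose(f);

        /* "user.test" was never set, so the kernel reports ENODATA. */
        if (removexattr(path, "user.test") == -1)
            printf("removexattr: %s (errno=%d)\n", strerror(errno), errno);
        return 0;
    }

As for b.) and c.): the data itself needs no fixing, and as a stopgap the client log level can be lowered (e.g. via the diagnostics.client-log-level volume option), at the cost of hiding other warnings as well.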


Version-Release number of selected component (if applicable):

glusterfs 3.4.5 built on Aug  6 2014 19:15:07

How reproducible:
Consistently; one set of warnings per file copied.

Steps to Reproduce:
1. Create a replica 2 volume across 2 servers and FUSE-mount it on a client.
2. rsync data onto the FUSE mount.
3. Watch the client log: a REMOVEXATTR warning appears for each file copied.

Actual results:
An - so far unknown - extended attribute (xattr) key or value that should be removed cannot be found for a specific file, and a warning is logged for every file copied.


Expected results:
As it is a warning message and the data SEEMS to be okay,
logging of this event should be reduced by a meaningful algorithm, e.g.:
if the message density is greater than X, print a short version.
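
A minimal sketch of that kind of density-based suppression (thresholds, names, and behaviour here are made up for illustration; this is not GlusterFS code):

    /* Hypothetical density-based log throttle: print at most
     * THRESHOLD copies of a message per WINDOW seconds, then emit
     * a single summary line once the window rolls over. */
    #include <stdio.h>
    #include <time.h>

    #define THRESHOLD 5
    #define WINDOW    60 /* seconds */

    static time_t   window_start;
    static unsigned seen;

    void log_throttled(const char *msg)
    {
        time_t now = time(NULL);

        if (now - window_start > WINDOW) {
            if (seen > THRESHOLD)
                printf("... last message repeated %u more times\n",
                       seen - THRESHOLD);
            window_start = now;
            seen = 0;
        }

        if (++seen <= THRESHOLD)
            printf("%s\n", msg);   /* below the density cap: log it */
    }

    int main(void)
    {
        /* 100 identical warnings print only THRESHOLD lines; the
         * summary appears once the next window begins. */
        for (int i = 0; i < 100; i++)
            log_throttled("remote operation failed: No data available");
        return 0;
    }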

Additional info:
Bricks are running on ZFS on Linux (ZoL), not XFS.

--- Additional comment from Anand Avati on 2014-09-19 12:45:12 EDT ---

REVIEW: http://review.gluster.org/8781 (protocol: Log ENODATA & ENOATTR logs at DEBUG loglevel in removexattr_cbk.) posted (#4) for review on master by Vijay Bellur (vbellur)

--- Additional comment from Anand Avati on 2014-09-24 03:49:03 EDT ---

COMMIT: http://review.gluster.org/8781 committed in master by Vijay Bellur (vbellur) 
------
commit bd592f8b8379087604f35c3b377f6e94b9e1697d
Author: Vijay Bellur <vbellur>
Date:   Fri Sep 19 19:08:05 2014 +0530

    protocol: Log ENODATA & ENOATTR logs at DEBUG loglevel in removexattr_cbk.
    
    Prevents messages of the following type from being seen by default in the log files:
    
    [2014-09-19 07:57:39.877806] W
    [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-<volume_name>-client-0:
    remote operation failed: No data available
    [2014-09-19 07:57:39.877963] W
    [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-<volume_name>-client-1:
    remote operation failed: No data available
    
    Change-Id: I3b1a121b0fc272eb772547275bb8085ed19db5a1
    BUG: 1144527
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/8781
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
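
In short, the fix keeps the messages but demotes the expected errnos to DEBUG so they no longer show up at the default log level. A simplified sketch of the pattern (the real change is in client3_3_removexattr_cbk in client-rpc-fops.c, per the review link above; the helper and enum below are illustrative only, not GlusterFS code):

    /* Illustrative only: pick the log level for a failed
     * removexattr() based on the errno the server returned. */
    #include <errno.h>

    enum log_level { LOG_DEBUG, LOG_WARNING };

    enum log_level removexattr_log_level(int op_errno)
    {
        /* A missing key is an expected outcome of removexattr(),
         * not a per-file failure worth a WARNING. */
        if (op_errno == ENODATA
    #ifdef ENOATTR
            || op_errno == ENOATTR
    #endif
           )
            return LOG_DEBUG;

        return LOG_WARNING;
    }

Any other errno still logs at WARNING, so only the expected missing-key case is hidden by default.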

Comment 1 Anand Avati 2015-02-15 18:48:48 UTC
REVIEW: http://review.gluster.org/9655 (protocol: Log ENODATA & ENOATTR logs at DEBUG loglevel in removexattr_cbk.) posted (#1) for review on release-3.5 by Vijay Bellur (vbellur)

Comment 2 Anand Avati 2015-02-25 08:55:28 UTC
COMMIT: http://review.gluster.org/9655 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit 3242a1f79abd27743da9ff3209be49df5474b4b3
Author: Vijay Bellur <vbellur>
Date:   Fri Sep 19 19:08:05 2014 +0530

    protocol: Log ENODATA & ENOATTR logs at DEBUG loglevel in removexattr_cbk.
    
    Prevents messages of the following type from being seen by default in the log files:
    
    [2014-09-19 07:57:39.877806] W
    [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-<volume_name>-client-0:
    remote operation failed: No data available
    [2014-09-19 07:57:39.877963] W
    [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-<volume_name>-client-1:
    remote operation failed: No data available
    
    Change-Id: I3b1a121b0fc272eb772547275bb8085ed19db5a1
    BUG: 1192832
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/8781
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
    Reviewed-on: http://review.gluster.org/9655

Comment 3 Niels de Vos 2015-06-03 21:09:14 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.4, please reopen this bug report.

glusterfs-3.5.4 has been announced on the Gluster Packaging mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/2
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

Comment 4 Anoop C S 2015-09-24 06:56:13 UTC
*** Bug 1245966 has been marked as a duplicate of this bug. ***