Bug 1246736 - client3_3_removexattr_cbk floods the logs with "No data available" messages
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: logging
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Niels de Vos
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1246728
 
Reported: 2015-07-25 08:30 UTC by Niels de Vos
Modified: 2016-06-16 13:26 UTC (History)
3 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1246728
Environment:
Last Closed: 2016-06-16 13:26:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Niels de Vos 2015-07-25 08:30:18 UTC
+++ This bug was initially created as a clone of Bug #1246728 +++

Similar problem to https://bugzilla.redhat.com/show_bug.cgi?id=1188064

With log-level set to ERROR, I still get log messages like this:

[2015-07-25 02:06:11.206465] [MSGID: 114031] [client-rpc-fops.c:1300:client3_3_removexattr_cbk] 0-gv0-client-0: remote operation failed: No data available [No data available]

I notice that at line 1293 of xlators/protocol/client/src/client-rpc-fops.c in the 3.7.2 source, the log level is set to 0. The 3.6.4 source set this to GF_LOG_DEBUG.

Comment 1 Anand Avati 2015-07-25 08:46:56 UTC
REVIEW: http://review.gluster.org/11759 (logging: client3_3_removexattr_cbk should not log expected ENODATA) posted (#1) for review on master by Niels de Vos (ndevos)

Comment 2 Niels de Vos 2016-06-16 13:26:51 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

