Bug 1114847 - glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: transport
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Vijay Bellur
QA Contact:
URL:
Whiteboard: SNAPSHOT
Depends On: 1113954
Blocks: 1310969
 
Reported: 2014-07-01 07:02 UTC by krishnan parthasarathi
Modified: 2016-06-16 16:17 UTC (History)
5 users

Fixed In Version: glusterfs-3.8.0
Clone Of: 1113954
: 1310969 (view as bug list)
Environment:
Last Closed: 2016-06-16 12:38:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description krishnan parthasarathi 2014-07-01 07:02:35 UTC
+++ This bug was initially created as a clone of Bug #1113954 +++

Description of problem:
=======================

When a brick is brought down, the following message is logged every 3 seconds in the glusterd logs:

[2014-06-27 10:28:12.693185] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:15.694036] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:18.694114] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:21.694459] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:24.694963] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:27.695196] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:30.696703] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:33.696101] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:36.696439] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)
[2014-06-27 10:28:39.697021] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)

If a brick stays down for weeks, these messages can fill the root filesystem and make the system unusable.


Version-Release number of selected component (if applicable):
=============================================================

mainline


How reproducible:
=================
1/1


Steps to Reproduce:
====================
1. Create a 4-node cluster
2. Create a volume vol0 (2*2) across the 4 nodes
3. Create a snapshot of the volume
4. Create another volume vol4 (2*3) from 3 nodes of the cluster
5. Bring down one of the bricks of vol4 (I brought down the brick on the third node)

Actual results:
===============
[2014-06-27 10:28:39.697021] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)

The message is logged every 3 seconds, which over time risks filling the disk and taking the whole system down. The logged failure needs investigation.

Comment 1 Anand Avati 2014-07-01 07:03:44 UTC
REVIEW: http://review.gluster.org/8210 (socket: reduce rate of readv failure logs due to disconnect) posted (#1) for review on master by Krishnan Parthasarathi (kparthas)

Comment 2 Anand Avati 2014-07-16 14:08:43 UTC
REVIEW: http://review.gluster.org/8210 (socket: reduce rate of readv failure logs due to disconnect) posted (#2) for review on master by Krishnan Parthasarathi (kparthas)

Comment 3 Anand Avati 2014-07-17 06:47:31 UTC
REVIEW: http://review.gluster.org/8210 (socket: reduce rate of readv failure logs due to disconnect) posted (#3) for review on master by Krishnan Parthasarathi (kparthas)

Comment 6 Vijay Bellur 2016-01-13 07:08:20 UTC
REVIEW: http://review.gluster.org/8210 (socket: reduce rate of readv failure logs due to disconnect) posted (#4) for review on master by Atin Mukherjee (amukherj)

Comment 7 Vijay Bellur 2016-02-22 12:05:13 UTC
REVIEW: http://review.gluster.org/8210 (socket: reduce rate of readv failure logs due to disconnect) posted (#5) for review on master by Atin Mukherjee (amukherj)

Comment 8 Vijay Bellur 2016-02-22 18:10:58 UTC
COMMIT: http://review.gluster.org/8210 committed in master by Raghavendra G (rgowdapp) 
------
commit 27c09b9357004e5fdb02fdf0c586f3402878db1f
Author: Krishnan Parthasarathi <kparthas>
Date:   Mon Jun 30 11:26:54 2014 +0530

    socket: reduce rate of readv failure logs due to disconnect
    
    ... by using GF_LOG_OCCASIONALLY
    
    Change-Id: I779ff32ead13c8bb446a57b5baccf068ae992df1
    BUG: 1114847
    Signed-off-by: Krishnan Parthasarathi <kparthas>
    Reviewed-on: http://review.gluster.org/8210
    Tested-by: Atin Mukherjee <amukherj>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>

Comment 9 Niels de Vos 2016-06-16 12:38:43 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


