Bug 1596787 - glusterfs rpc-clnt.c: error returned while attempting to connect to host: (null), port 0
Summary: glusterfs rpc-clnt.c: error returned while attempting to connect to host: (null), port 0
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-29 17:45 UTC by kcao22003
Modified: 2019-03-25 16:30 UTC (History)
CC List: 5 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-25 16:30:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Gluster.org Gerrit 21895: quotad: fix passing GF_DATA_TYPE_STR_OLD dict data to v4 protocol (Open) - last updated 2019-03-04 09:13:43 UTC
Gluster.org Gerrit 21897: rpc-clnt: reduce transport connect log for EINPROGRESS (Open) - last updated 2019-01-07 03:20:08 UTC

Description kcao22003 2018-06-29 17:45:00 UTC
Hello,

We are currently using gluster version 4.0.0. We have a gluster volume with quota enabled and a limit-usage set to an arbitrary value such as 1GB.

We see lots of warning messages like the following in /var/log/glusterfs/bricks/mnt-[xxx]-[VOLUME_NAME].log:

W [rpc-clnt.c:1739:rpc_clnt_submit] 0-<VOLUME_NAME>-quota: error returned while attempting to connect to host: (null), port 0

The same message is written to the log file every 15 seconds.

Does anyone know what is going on, or of any issue that might be related to this?

Thank you for your time and support.


Additional info:

1. /var/log/glusterfs/quotad.log

The following are logging messages in the quotad log (/var/log/glusterfs/quotad.log):

The message "W [MSGID 101016] [glusterfs3.h:743: dict-to-xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 132 times between [<TIMESTAMP1>] and [<TIMESTAMP2>]

The message "W [MSGID 101016] [glusterfs3.h:743: dict-to-xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 132 times between [<TIMESTAMP3>] and [<TIMESTAMP4>]

[<TIMESTAMP5>] W [MSGID 101016] [glusterfs3.h:743: dict-to-xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
[<TIMESTAMP6>] W [MSGID 101016] [glusterfs3.h:743: dict-to-xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]

These were repeated over and over...
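
The "is not sent on wire" warnings above are emitted while dict entries are converted to the v4 on-wire (XDR) form, which drops values whose data type it does not recognise. Below is a highly simplified, hypothetical sketch of that kind of type check; every name in it (the enum, struct, and encode_entry_v4 function) is invented for illustration and is not the GlusterFS dict API.

/* Hypothetical, simplified sketch of a dict-to-wire encoder that
 * skips entries carrying a legacy value type.  All names here are
 * invented for illustration; this is not the GlusterFS dict API. */
#include <stdio.h>

typedef enum {
    VAL_TYPE_UNKNOWN = 0,
    VAL_TYPE_STR_OLD,   /* legacy, untyped string value */
    VAL_TYPE_STR,       /* typed string value understood by the v4 protocol */
    VAL_TYPE_UINT64,
} val_type_t;

typedef struct {
    const char *key;
    val_type_t  type;
    const void *value;
} dict_entry_t;

/* Only entries whose type the v4 protocol knows about go on the wire. */
static int encode_entry_v4(const dict_entry_t *e)
{
    if (e->type == VAL_TYPE_UNKNOWN || e->type == VAL_TYPE_STR_OLD) {
        fprintf(stderr,
                "W: key '%s' is not sent on wire [Invalid argument]\n",
                e->key);
        return -1;
    }
    /* ... serialize e->value according to e->type ... */
    return 0;
}

int main(void)
{
    /* An entry tagged with the legacy type is dropped with a warning ... */
    dict_entry_t old_style = { "volume-uuid", VAL_TYPE_STR_OLD, "1234" };
    encode_entry_v4(&old_style);

    /* ... while re-tagging the same value as a plain typed string
     * (the approach suggested by the patch title in comment 8)
     * lets it be encoded. */
    dict_entry_t fixed = { "volume-uuid", VAL_TYPE_STR, "1234" };
    encode_entry_v4(&fixed);
    return 0;
}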

2. ps -ef | grep quotad
   
  root 100 1 0  Jun28 ? 00:00:21 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/run/gluster/quotad/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/gluster/<SOME_ID>.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off --process-name quotad

3. gluster v status

   Status of volume: MyVol
   Gluster Process                 TCP Port   RDMA Port   Online   PID
   ---------------------------------------------------------------------
   Brick server1:/mnt/g1/MyVol     49152      0           Y        109
   Brick server2:/mnt/g1/MyVol     49152      0           Y        109
   Brick server3:/mnt/g1/MyVol     49152      0           Y        109
   Self-heal Daemon on localhost   NA         NA          Y        91
   Quota-Daemon on localhost       NA         NA          Y        100
   Self-heal Daemon on server1     NA         NA          Y        91
   Quota-Daemon on server1         NA         NA          Y        100
   Self-heal Daemon on server2     NA         NA          Y        91
   Quota-Daemon on server2         NA         NA          Y        1

Comment 2 kcao22003 2018-07-24 15:52:15 UTC
Does anyone have any status update regarding this bug/issue?

Comment 3 Shyamsundar 2018-10-23 14:53:52 UTC
Release 3.12 has been EOL'd and this bug was still found to be in the NEW state; hence, moving the version to mainline to triage it and take appropriate action.

Comment 4 mabi 2018-11-02 18:18:09 UTC
I have the same issue running GlusterFS 4.1.5 on Debian 9. I also reported this issue a few days ago on the gluster-users mailing list.

Comment 5 Worker Ant 2018-12-20 03:39:48 UTC
REVIEW: https://review.gluster.org/21895 (rpc: encode/decode GF_DATA_TYPE_STR_OLD dict value as GF_DATA_TYPE_STR) posted (#1) for review on master by Kinglong Mee

Comment 6 Worker Ant 2018-12-20 09:06:44 UTC
REVIEW: https://review.gluster.org/21897 (rpc-clnt: reduce transport connect log for EINPROGRESS) posted (#1) for review on master by Kinglong Mee

Comment 7 Worker Ant 2019-01-07 03:20:08 UTC
REVIEW: https://review.gluster.org/21897 (rpc-clnt: reduce transport connect log for EINPROGRESS) posted (#6) for review on master by Amar Tumballi
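
For context on this patch title: a connect() on a non-blocking socket normally returns -1 with errno set to EINPROGRESS while the TCP handshake is still in flight. That is an expected transient state rather than a failure, which is why logging it at warning level on every reconnect attempt produces the repeating message reported here. A minimal standalone sketch of the pattern follows; it is not GlusterFS code, and the address and port in it are placeholders.

/* Minimal sketch of a non-blocking connect, illustrating why
 * EINPROGRESS is an expected result rather than an error worth a
 * warning-level log.  Not GlusterFS code; the address is a placeholder. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    /* Put the socket in non-blocking mode, as RPC transports do. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(24007);                    /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* placeholder host */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        if (errno == EINPROGRESS) {
            /* Handshake still in flight: wait for writability
             * (poll/epoll) and check SO_ERROR later.  Logging this
             * as a warning on every attempt only creates noise. */
            printf("connect in progress (expected)\n");
        } else {
            perror("connect");                       /* a real failure */
        }
    }

    close(fd);
    return 0;
}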

Comment 8 Worker Ant 2019-03-04 09:13:44 UTC
REVIEW: https://review.gluster.org/21895 (quotad: fix passing GF_DATA_TYPE_STR_OLD dict data to v4 protocol) merged (#10) on master by Amar Tumballi

Comment 9 Shyamsundar 2019-03-25 16:30:27 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

