Hello,

We are currently running Gluster version 4.0.0. We have a Gluster volume with quota enabled and a usage limit set to an arbitrary value such as 1GB. We see many warning messages like the following in /var/log/glusterfs/bricks/mnt-[xxx]-[VOLUME_NAME].log:

W [rpc-clnt.c:1739:rpc_clnt_submit] 0-<VOLUME_NAME>-quota: error returned while attempting to connect to host: (null), port 0

The same message is written to the log file every 15 seconds. Does anyone know what is going on, or of any issue that might be related to this? Thank you for your time and support.

Additional info:

1. /var/log/glusterfs/quotad.log

The following messages appear in the quotad log (/var/log/glusterfs/quotad.log):

The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 132 times between [<TIMESTAMP1>] and [<TIMESTAMP2>]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 132 times between [<TIMESTAMP3>] and [<TIMESTAMP4>]
[<TIMESTAMP5>] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
[<TIMESTAMP6>] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]

These entries repeat over and over.

2. ps -ef | grep quotad

root 100 1 0 Jun28 ? 00:00:21 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/run/gluster/quotad/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/gluster/<SOME_ID>.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off --process-name quotad

3. gluster v status

Status of volume: MyVol
Gluster Process                        TCP Port  RDMA Port  Online  PID
------------------------------------------------------------------------
Brick server1:/mnt/g1/MyVol            49152     0          Y       109
Brick server2:/mnt/g1/MyVol            49152     0          Y       109
Brick server3:/mnt/g1/MyVol            49152     0          Y       109
Self-heal Daemon on localhost          N/A       N/A        Y       91
Quota Daemon on localhost              N/A       N/A        Y       100
Self-heal Daemon on server1            N/A       N/A        Y       91
Quota Daemon on server1                N/A       N/A        Y       100
Self-heal Daemon on server2            N/A       N/A        Y       91
Quota Daemon on server2                N/A       N/A        Y       1
Does anyone have any status update regarding this bug/issue?
Release 3.12 has reached end-of-life and this bug was still found in the NEW state, so the version is being moved to mainline in order to triage it and take appropriate action.
I have the same issue running GlusterFS 4.1.5 on Debian 9. I also reported this issue a few days ago on the gluster-users mailing list.
REVIEW: https://review.gluster.org/21895 (rpc: encode/decode GF_DATA_TYPE_STR_OLD dict value as GF_DATA_TYPE_STR) posted (#1) for review on master by Kinglong Mee
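For context, the dict-to-xdr warnings in the quotad log line up with what this patch title describes: dict values stored with the legacy GF_DATA_TYPE_STR_OLD type were rejected by the newer v4 on-wire serializer, so keys such as 'volume-uuid' were dropped with "is not sent on wire [Invalid argument]". Below is a minimal, self-contained C sketch of the idea only; it is not the actual GlusterFS code, and all type names and helpers in it are hypothetical stand-ins.

/* Sketch of the fix's idea: map the legacy string type to the plain
 * string type before encoding, instead of failing with EINVAL.
 * TYPE_STR_OLD, dict_value and encode_value are hypothetical. */
#include <stdio.h>

enum data_type {
    TYPE_INT,
    TYPE_STR,
    TYPE_STR_OLD,   /* legacy alias for string data */
};

struct dict_value {
    enum data_type type;
    const char *str;
};

/* Hypothetical serializer: returns 0 if the value can be encoded. */
static int encode_value(const char *key, struct dict_value *v)
{
    /* The fix: treat the legacy string type as an ordinary string. */
    if (v->type == TYPE_STR_OLD)
        v->type = TYPE_STR;

    switch (v->type) {
    case TYPE_INT:
    case TYPE_STR:
        printf("encoded key '%s'\n", key);
        return 0;
    default:
        /* Without the mapping above, legacy values ended up here. */
        fprintf(stderr, "W: key '%s' is not sent on wire\n", key);
        return -1;
    }
}

int main(void)
{
    struct dict_value uuid = { TYPE_STR_OLD, "1234-abcd" };
    return encode_value("volume-uuid", &uuid) == 0 ? 0 : 1;
}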
REVIEW: https://review.gluster.org/21897 (rpc-clnt: reduce transport connect log for EINPROGRESS) posted (#1) for review on master by Kinglong Mee
REVIEW: https://review.gluster.org/21897 (rpc-clnt: reduce transport connect log for EINPROGRESS) posted (#6) for review on master by Amar Tumballi
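As background on why the brick log fills up every 15 seconds: the rpc client retries its connection on a timer, and with a non-blocking socket an attempt that is merely still in progress (errno EINPROGRESS) was logged at warning level on every retry. The standalone C sketch below illustrates the distinction this patch draws; it is not the GlusterFS code, and nb_connect is a hypothetical helper.

/* Simplified illustration: EINPROGRESS from a non-blocking connect()
 * is the normal in-progress case and should not be logged as a
 * warning; only real failures deserve one. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Hypothetical helper: start a non-blocking connect and decide how
 * loudly to report the result. */
static int nb_connect(int sock, const struct sockaddr *addr, socklen_t len)
{
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    if (connect(sock, addr, len) == 0)
        return 0;                       /* connected immediately */

    if (errno == EINPROGRESS) {
        /* Expected for a non-blocking socket: completion is reported
         * later via poll()/epoll, so log at debug level only. */
        fprintf(stderr, "D: connect in progress\n");
        return 0;
    }

    /* A real failure: this one does deserve a warning. */
    fprintf(stderr, "W: connect failed: %s\n", strerror(errno));
    return -1;
}

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port = htons(24007), /* glusterd port */
                              .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int ret = nb_connect(sock, (struct sockaddr *)&sa, sizeof(sa));
    close(sock);
    return ret == 0 ? 0 : 1;
}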
REVIEW: https://review.gluster.org/21895 (quotad: fix passing GF_DATA_TYPE_STR_OLD dict data to v4 protocol) merged (#10) on master by Amar Tumballi
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/