Bug 1686461 - Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages
Summary: Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-07 13:57 UTC by ryan
Modified: 2019-06-18 03:55 UTC
CC: 3 users

Fixed In Version: glusterfs-6.x
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-17 06:31:23 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description ryan 2019-03-07 13:57:06 UTC
Description of problem:
[2019-03-07 13:51:35.683031] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'max_count' is not sent on wire [Invalid argument]
[2019-03-07 13:51:35.683056] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'quota-list-count' is not sent on wire [Invalid argument]
[2019-03-07 13:51:35.683057] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'gfid' is not sent on wire [Invalid argument]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'default-soft-limit' is not sent on wire [Invalid argument]" repeated 2 times between [2019-03-07 13:51:35.682913] and [2019-03-07 13:51:35.683120]
[2019-03-07 13:51:35.683121] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-03-07 13:51:35.683152] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'max_count' is not sent on wire [Invalid argument]
[2019-03-07 13:51:35.683153] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'quota-list-count' is not sent on wire [Invalid argument]

Version-Release number of selected component (if applicable):
4.1.7

How reproducible:
Every time

Steps to Reproduce:
1. Enable quotas on a volume
2. Tail the quotad.log log file (a reproduction sketch follows below)
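
A minimal reproduction sketch (the volume name $VOLUME, the 10GB limit, and the default log path /var/log/glusterfs/quotad.log are assumptions; adjust them for your setup). Setting a limit is optional but gives quotad something to enforce:

# gluster volume quota $VOLUME enable
# gluster volume quota $VOLUME limit-usage / 10GB
# tail -f /var/log/glusterfs/quotad.log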

Actual results:
The log fills with the constantly repeating messages shown above. The system drive eventually fills up and the glusterd daemon fails.

Expected results:
The logs are not flooded with these messages.


Additional info:

Comment 1 Frank Rühlemann 2019-04-15 11:28:25 UTC
We experience the same problem:

less /var/log/quotad.log
…
[2019-04-14 04:26:40.249481] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-04-14 04:26:40.249513] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 80088 times between [2019-04-14 04:26:40.249481] and [2019-04-14 04:28:40.120911]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 80088 times between [2019-04-14 04:26:40.249513] and [2019-04-14 04:28:40.120912]
[2019-04-14 04:28:40.122513] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-04-14 04:28:40.122545] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 37540 times between [2019-04-14 04:28:40.122513] and [2019-04-14 04:30:39.718206]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 37540 times between [2019-04-14 04:28:40.122545] and [2019-04-14 04:30:39.718207]
[2019-04-14 04:30:42.000322] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-04-14 04:30:42.000350] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 49409 times between [2019-04-14 04:30:42.000322] and [2019-04-14 04:32:40.046852]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 49409 times between [2019-04-14 04:30:42.000350] and [2019-04-14 04:32:40.046853]
[2019-04-14 04:32:40.345351] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-04-14 04:32:40.345382] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]" repeated 29429 times between [2019-04-14 04:32:40.345351] and [2019-04-14 04:34:40.098453]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 29429 times between [2019-04-14 04:32:40.345382] and [2019-04-14 04:34:40.098454]
…

And more of these every second.

We updated from Gluster 3.12.14 to 4.1.7 on March 21st, and that is when these log messages first appeared:

zless quotad.4.log
…
[2019-03-21 15:28:45.195091] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-03-21 15:28:45.195100] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-03-21 15:28:45.195103] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
[2019-03-21 15:28:45.195155] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]
[2019-03-21 15:28:45.195268] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
[2019-03-21 15:28:45.195305] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]
…




Some general information about our system:

# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

# gluster --version
glusterfs 4.1.7

# gluster volume info $VOLUME

Volume Name: $VOLUME
Type: Distributed-Disperse
Volume ID: 0a64f278-f432-4793-8188-346557a4e146
Status: Started
Snapshot Count: 0
Number of Bricks: 15 x (4 + 2) = 90
Transport-type: tcp
Bricks:
Brick1: gluster01.:/srv/glusterfs/bricks/$VOLUME0100/data
Brick2: gluster01.:/srv/glusterfs/bricks/$VOLUME0101/data
Brick3: gluster02.:/srv/glusterfs/bricks/$VOLUME0200/data
Brick4: gluster02.:/srv/glusterfs/bricks/$VOLUME0201/data
Brick5: gluster05.:/srv/glusterfs/bricks/$VOLUME0500/data
Brick6: gluster05.:/srv/glusterfs/bricks/$VOLUME0501/data
Brick7: gluster01.:/srv/glusterfs/bricks/$VOLUME0102/data
Brick8: gluster01.:/srv/glusterfs/bricks/$VOLUME0103/data
Brick9: gluster02.:/srv/glusterfs/bricks/$VOLUME0202_new/data
Brick10: gluster02.:/srv/glusterfs/bricks/$VOLUME0203/data
Brick11: gluster05.:/srv/glusterfs/bricks/$VOLUME0502/data
Brick12: gluster05.:/srv/glusterfs/bricks/$VOLUME0503/data
Brick13: gluster01.:/srv/glusterfs/bricks/$VOLUME0104/data
Brick14: gluster01.:/srv/glusterfs/bricks/$VOLUME0105/data
Brick15: gluster02.:/srv/glusterfs/bricks/$VOLUME0204/data
Brick16: gluster02.:/srv/glusterfs/bricks/$VOLUME0205/data
Brick17: gluster05.:/srv/glusterfs/bricks/$VOLUME0504/data
Brick18: gluster05.:/srv/glusterfs/bricks/$VOLUME0505/data
Brick19: gluster01.:/srv/glusterfs/bricks/$VOLUME0106/data
Brick20: gluster01.:/srv/glusterfs/bricks/$VOLUME0107/data
Brick21: gluster02.:/srv/glusterfs/bricks/$VOLUME0206/data
Brick22: gluster02.:/srv/glusterfs/bricks/$VOLUME0207/data
Brick23: gluster05.:/srv/glusterfs/bricks/$VOLUME0506/data
Brick24: gluster05.:/srv/glusterfs/bricks/$VOLUME0507/data
Brick25: gluster01.:/srv/glusterfs/bricks/$VOLUME0108/data
Brick26: gluster01.:/srv/glusterfs/bricks/$VOLUME0109/data
Brick27: gluster02.:/srv/glusterfs/bricks/$VOLUME0208/data
Brick28: gluster02.:/srv/glusterfs/bricks/$VOLUME0209/data
Brick29: gluster05.:/srv/glusterfs/bricks/$VOLUME0508/data
Brick30: gluster05.:/srv/glusterfs/bricks/$VOLUME0509/data
Brick31: gluster01.:/srv/glusterfs/bricks/$VOLUME0110/data
Brick32: gluster01.:/srv/glusterfs/bricks/$VOLUME0111/data
Brick33: gluster02.:/srv/glusterfs/bricks/$VOLUME0210/data
Brick34: gluster02.:/srv/glusterfs/bricks/$VOLUME0211/data
Brick35: gluster05.:/srv/glusterfs/bricks/$VOLUME0510/data
Brick36: gluster05.:/srv/glusterfs/bricks/$VOLUME0511/data
Brick37: gluster06.:/srv/glusterfs/bricks/$VOLUME0600/data
Brick38: gluster06.:/srv/glusterfs/bricks/$VOLUME0601/data
Brick39: gluster07.:/srv/glusterfs/bricks/$VOLUME0700/data
Brick40: gluster07.:/srv/glusterfs/bricks/$VOLUME0701/data
Brick41: gluster08.:/srv/glusterfs/bricks/$VOLUME0800/data
Brick42: gluster08.:/srv/glusterfs/bricks/$VOLUME0801/data
Brick43: gluster06.:/srv/glusterfs/bricks/$VOLUME0602/data
Brick44: gluster06.:/srv/glusterfs/bricks/$VOLUME0603/data
Brick45: gluster07.:/srv/glusterfs/bricks/$VOLUME0702/data
Brick46: gluster07.:/srv/glusterfs/bricks/$VOLUME0703/data
Brick47: gluster08.:/srv/glusterfs/bricks/$VOLUME0802/data
Brick48: gluster08.:/srv/glusterfs/bricks/$VOLUME0803/data
Brick49: gluster06.:/srv/glusterfs/bricks/$VOLUME0604/data
Brick50: gluster06.:/srv/glusterfs/bricks/$VOLUME0605/data
Brick51: gluster07.:/srv/glusterfs/bricks/$VOLUME0704/data
Brick52: gluster07.:/srv/glusterfs/bricks/$VOLUME0705/data
Brick53: gluster08.:/srv/glusterfs/bricks/$VOLUME0804/data
Brick54: gluster08.:/srv/glusterfs/bricks/$VOLUME0805/data
Brick55: gluster06.:/srv/glusterfs/bricks/$VOLUME0606/data
Brick56: gluster06.:/srv/glusterfs/bricks/$VOLUME0607/data
Brick57: gluster07.:/srv/glusterfs/bricks/$VOLUME0706/data
Brick58: gluster07.:/srv/glusterfs/bricks/$VOLUME0707/data
Brick59: gluster08.:/srv/glusterfs/bricks/$VOLUME0806/data
Brick60: gluster08.:/srv/glusterfs/bricks/$VOLUME0807/data
Brick61: gluster06.:/srv/glusterfs/bricks/$VOLUME0608/data
Brick62: gluster06.:/srv/glusterfs/bricks/$VOLUME0609/data
Brick63: gluster07.:/srv/glusterfs/bricks/$VOLUME0708/data
Brick64: gluster07.:/srv/glusterfs/bricks/$VOLUME0709/data
Brick65: gluster08.:/srv/glusterfs/bricks/$VOLUME0808/data
Brick66: gluster08.:/srv/glusterfs/bricks/$VOLUME0809/data
Brick67: gluster06.:/srv/glusterfs/bricks/$VOLUME0610/data
Brick68: gluster06.:/srv/glusterfs/bricks/$VOLUME0611/data
Brick69: gluster07.:/srv/glusterfs/bricks/$VOLUME0710/data
Brick70: gluster07.:/srv/glusterfs/bricks/$VOLUME0711/data
Brick71: gluster08.:/srv/glusterfs/bricks/$VOLUME0810/data
Brick72: gluster08.:/srv/glusterfs/bricks/$VOLUME0811/data
Brick73: gluster01.:/srv/glusterfs/bricks/$VOLUME0112/data
Brick74: gluster01.:/srv/glusterfs/bricks/$VOLUME0113/data
Brick75: gluster02.:/srv/glusterfs/bricks/$VOLUME0212/data
Brick76: gluster02.:/srv/glusterfs/bricks/$VOLUME0213/data
Brick77: gluster05.:/srv/glusterfs/bricks/$VOLUME0512/data
Brick78: gluster05.:/srv/glusterfs/bricks/$VOLUME0513/data
Brick79: gluster01.:/srv/glusterfs/bricks/$VOLUME0114/data
Brick80: gluster01.:/srv/glusterfs/bricks/$VOLUME0115/data
Brick81: gluster02.:/srv/glusterfs/bricks/$VOLUME0214/data
Brick82: gluster02.:/srv/glusterfs/bricks/$VOLUME0215/data
Brick83: gluster05.:/srv/glusterfs/bricks/$VOLUME0514/data
Brick84: gluster05.:/srv/glusterfs/bricks/$VOLUME0515/data
Brick85: gluster01.:/srv/glusterfs/bricks/$VOLUME0116/data
Brick86: gluster01.:/srv/glusterfs/bricks/$VOLUME0117/data
Brick87: gluster02.:/srv/glusterfs/bricks/$VOLUME0216/data
Brick88: gluster02.:/srv/glusterfs/bricks/$VOLUME0217/data
Brick89: gluster05.:/srv/glusterfs/bricks/$VOLUME0516/data
Brick90: gluster05.:/srv/glusterfs/bricks/$VOLUME0517/data
Options Reconfigured:
performance.md-cache-timeout: 5
performance.rda-cache-limit: 100MB
performance.parallel-readdir: off
performance.cache-refresh-timeout: 1
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
auth.allow: $IPSPACE
transport.address-family: inet
nfs.disable: on

Comment 2 Amar Tumballi 2019-06-17 06:31:23 UTC
Hi Ryan,

These messages have been demoted to DEBUG level as of GlusterFS 6.0. Please upgrade to get the fix.
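
Until an upgrade is possible, a log rotation policy can keep the repeating warnings from filling the system drive. A minimal sketch, assuming the default log path (the file name, frequency, and retention below are arbitrary examples; Gluster packages may already ship a logrotate rule covering /var/log/glusterfs, in which case adjust that one instead of adding a new file):

# cat /etc/logrotate.d/glusterfs-quotad
/var/log/glusterfs/quotad.log {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}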

Comment 3 ryan 2019-06-17 12:15:13 UTC
Hi Amar,

Thanks! Do you know when/if these will be backported into the 4.1 branch?

Best,
Ryan

Comment 4 Amar Tumballi 2019-06-18 03:55:36 UTC
https://review.gluster.org/#/q/I4157a7ec7d5ec9c2948b2bbc1e4cb8317f28d6b8 is the patch I am talking about. We have not ported the changes to the 4.1 branch. Considering the good feedback we are getting about the 6.x release (both performance and stability), I would ask you to consider an upgrade if that is possible, because 4.1 will go out of active support a month after the glusterfs-7.0 release.

