Bug 1579757

Summary: DHT Log flooding in mount log "key=trusted.glusterfs.dht.mds [Invalid argument]"
Product: [Community] GlusterFS
Component: distribute
Version: 4.1
Hardware: All
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Reporter: Mohit Agrawal <moagrawa>
Assignee: Mohit Agrawal <moagrawa>
QA Contact:
Docs Contact:
CC: atumball, bugs, ksandha, rhinduja, rhs-bugs, sankarshan, sheggodu, storage-qa-internal, tdesala
Keywords: Regression
Target Milestone: ---
Target Release: ---
Whiteboard:
Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1575895
Environment:
Last Closed: 2018-06-20 18:06:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1575895
Bug Blocks: 1503137, 1575910, 1579755

Description Mohit Agrawal 2018-05-18 09:14:15 UTC
+++ This bug was initially created as a clone of Bug #1575895 +++

Description of problem:
Log flooding of 
[2018-05-08 07:45:38.612289] I [dict.c:471:dict_get] (-->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x209e0) [0x7f13d2dc09e0] -->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x41605) [0x7f13d2de1605] -->/lib64/libglusterfs.so.0(dict_get+0x10c) [0x7f13e0e0dacc] ) 2-dict: !this || key=trusted.glusterfs.dht.mds [Invalid argument]
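
For context: the message comes from the argument-validation guard at the top of dict_get() (dict.c:471 in the backtrace above), which logs at INFO level every time it is handed a NULL dict, so each DHT request that passes a NULL xattr dict adds one line to the mount log. Below is a minimal, self-contained sketch of that pattern, using simplified stand-in types and plain stderr logging rather than the real libglusterfs code:

#include <stdio.h>
#include <string.h>
#include <errno.h>

/* Simplified stand-in for glusterfs' dict_t; only the pointer is needed here. */
typedef struct dict dict_t;

/* Sketch of the guard at the top of dict_get(): when the dict pointer is NULL
 * it logs "!this || key=<key> [Invalid argument]" and returns NULL.  Every
 * caller that passes a NULL dict therefore emits one log line, which is what
 * floods the mount log. */
static void *
dict_get_sketch (dict_t *this, const char *key)
{
        if (!this || !key) {
                fprintf (stderr, "I [dict.c:dict_get] !this || key=%s [%s]\n",
                         key ? key : "()", strerror (EINVAL));
                return NULL;
        }
        return NULL;   /* real lookup elided */
}

int
main (void)
{
        /* DHT looking up the MDS xattr in a dict that was never populated. */
        dict_get_sketch (NULL, "trusted.glusterfs.dht.mds");
        return 0;
}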

Version-Release number of selected component (if applicable):
3.12.2-8

How reproducible:
100%

Steps to Reproduce:
1. Create an arbiter volume

2. Run the fileop tool:
   mount -t nfs -o vers=4 rhsqe-repo.lab.eng.blr.redhat.com:/ /opt
   /opt/qa/tools/system_light/run.sh -w /mnt/nfsv4 -t fileop -l /var/tmp/out.log

3. Check the mount log

Actual results:
Log flooding with
[2018-05-08 07:45:38.612289] I [dict.c:471:dict_get] (-->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x209e0) [0x7f13d2dc09e0] -->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x41605) [0x7f13d2de1605] -->/lib64/libglusterfs.so.0(dict_get+0x10c) [0x7f13e0e0dacc] ) 2-dict: !this || key=trusted.glusterfs.dht.mds [Invalid argument]


Expected results:
There should be no log flooding.

Additional info:
SOS reports are available on newtsuit under BZ_report/<bug-id>

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-05-08 04:28:22 EDT ---

This bug is automatically being proposed for the release of Red Hat Gluster Storage 3 under active development and open for bug fixes, by setting the release flag 'rhgs-3.4.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-05-08 04:28:22 EDT ---

This bug report has Keywords: Regression or TestBlocker.

Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release.

Please resolve ASAP.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-05-08 13:55:46 EDT ---

This bug is automatically being provided 'pm_ack+' for the release flag 'rhgs-3.4.0', having been appropriately marked for the release, and having been provided ACK from Development and QE.

--- Additional comment from Amar Tumballi on 2018-05-08 14:39:25 EDT ---

I guess this is the fix? https://review.gluster.org/19981

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-05-08 14:39:30 EDT ---

Since this bug has been approved for the RHGS 3.4.0 release of Red Hat Gluster Storage 3, through release flag 'rhgs-3.4.0+', and through the Internal Whiteboard entry of '3.4.0', the Target Release is being automatically set to 'RHGS 3.4.0'

--- Additional comment from Sunil Kumar Acharya on 2018-05-09 08:16:23 EDT ---

Downstream Patch: https://code.engineering.redhat.com/gerrit/#/c/138205/

--- Additional comment from Red Hat PnT DevOps Automation on 2018-05-09 11:46:24 EDT ---

Bug report changed to ON_QA status by bugzilla-updater. Bug has been added to an advisory.
https://errata.devel.redhat.com/advisory/32725

--- Additional comment from Prasad Desala on 2018-05-17 06:23:09 EDT ---

Verified this BZ on glusterfs version glusterfs-3.12.2-10.

Followed below steps:
1) Created an arbiter volume and started it.
2) FUSE mounted on a client.
3) Ran the fileop tool:
   mount -t nfs -o vers=4 rhsqe-repo.lab.eng.blr.redhat.com:/ /opt
   /opt/qa/tools/system_light/run.sh -w /mnt/arbiter -t fileop -l /var/tmp/out.log
4) While the script in step-3 is in-progress, added 3 bricks to the arbiter volume.

In the client mount logs, no flooding of the message "-dict: !this || key=trusted.glusterfs.dht.mds [Invalid argument]" was seen; the message appeared only 3 times.

Moving this BZ to Verified.

Comment 2 Worker Ant 2018-05-22 04:54:02 UTC
REVIEW: https://review.gluster.org/20040 (dht: Avoid dict log flooding for internal MDS xattr) posted (#2) for review on release-4.1 by N Balachandran

Comment 3 Worker Ant 2018-05-22 10:25:24 UTC
COMMIT: https://review.gluster.org/20040 committed in release-4.1 by "N Balachandran" <nbalacha> with a commit message- dht: Avoid dict log flooding for internal MDS xattr

Problem: Before populating the internal MDS xattr, DHT first checks whether
         MDS is present in the xattr dict or not. If the xattr dictionary is
         NULL, dict_get logs a message saying either the dict or the key is NULL.

Solution: Check the xattr dict before calling dict_get; if it is NULL, there
          is no need to call dict_get.


BUG: 1579757
Change-Id: I81604ec5945b85eba14b42f4583d06ec713028f4
Signed-off-by: Mohit Agrawal <moagrawa>
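
The change described in the commit message above amounts to guarding the MDS-xattr lookup so dict_get() is never handed a NULL dict. Below is a minimal, self-contained before/after sketch of that guard; the types, function names, and logging are simplified stand-ins, not the actual DHT code:

#include <stdio.h>
#include <string.h>
#include <errno.h>

/* Simplified stand-ins for the real glusterfs types; names are illustrative. */
typedef struct dict dict_t;

#define DHT_MDS_XATTR_KEY "trusted.glusterfs.dht.mds"   /* key from the log line */

/* Same guard behaviour as dict_get(): log and bail out on a NULL dict. */
static void *
dict_get_sketch (dict_t *this, const char *key)
{
        if (!this || !key) {
                fprintf (stderr, "I [dict] !this || key=%s [%s]\n",
                         key ? key : "()", strerror (EINVAL));
                return NULL;
        }
        return NULL;   /* real lookup elided */
}

/* Before the fix, DHT called dict_get() unconditionally, so a NULL xattr dict
 * produced the INFO message on every request.  The fix adds a NULL check so
 * dict_get() is only called when there is actually a dict to look into. */
static void *
dht_lookup_mds_xattr_sketch (dict_t *xattr)
{
        if (!xattr)            /* the added guard: nothing to look up */
                return NULL;

        return dict_get_sketch (xattr, DHT_MDS_XATTR_KEY);
}

int
main (void)
{
        dht_lookup_mds_xattr_sketch (NULL);   /* silent now, no log line */
        return 0;
}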

Comment 4 Shyamsundar 2018-06-20 18:06:39 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/